Coronavirus: number crunchers


19/03/2020




When the UK suffered its Salmonella enteritidis PT4 epidemic back in the late 1980s and early '90s, it was part of a global phenomenon. Strongly associated with poultry, the type seems to have emerged in the United States before moving to Europe, with the UK incidence for a while setting the pace.

Yet, while almost all European countries reported increased incidences, there was one notable exception. The Dutch, seemingly, were exceptionally fortunate in being the only country to enjoy a declining incidence of the illness in humans, bucking the global trend.

This certainly was not the result of lower prevalence in the national broiler flock. As part of my PhD studies, I had spent time in Holland looking at the poultry industry, and had access to the figures over the period.

What I later found was that, as the salmonella incidence started to increase in Europe, the Dutch government had changed the rules for monitoring salmonella illness in humans.

The way the system had worked was that those people who became ill with suspect food poisoning could be referred by their doctors, and get a basic test done free of charge by the state public health laboratories.

However, under the new rules, if doctors wanted to know the type of salmonella involved (which was not necessary for the management of the illness), there was a charge levied, as a result of which many doctors didn't bother. Also under the new rules, though, unless the salmonella had been typed, the notification wasn't accepted on the national database. Thus, most illness was no longer officially recorded.

It was by that means that the incidence was kept so low. But then Holland, with a human population of less than a quarter of the UK's, had a broiler industry of similar size, with massive exports around the world. Therefore, it was put to me, the Dutch couldn't afford to follow in the path of the UK and indulge in a salmonella scare, so the authorities made sure it didn't happen.

What this does, of course, is illustrate just how fragile public health data can be. And even without governments massaging the figures (which is more common than one might think), there are many errors that can creep in when trying to monitor an outbreak.

Not least, when an outbreak is first detected, the disease may be under-reported because few physicians are aware of it, and so do not ask for affected patients to be tested.

As the headlines increase, though, two things often happen. First, more people suspect they might have the disease and present to their GPs or hospitals, demanding to be tested. And then the physicians themselves are more willing to test, sending more samples to the laboratories. As an outbreak progresses, more laboratories may then offer testing facilities.

The cumulative effect of this may be to inflate substantially the apparent rise in disease incidence, even to the extent that, as control measures are introduced and begin to bring down the true incidence, the statistics show a continuing increase – giving the impression that the controls aren't working.

On the other hand, if a disease is increasing in the community at such a rate that it outstrips the capacity of the health services to test for the causal organism, or if the testing programme does not reflect the distribution of the disease (even where the number of tests is increasing), the incidence of a disease may be substantially under-recorded.

To some extent, this may not matter too much if the under-recording and other errors are constant. Although the figures may not reflect actual incidence, the profile of the epidemic curve will be accurate enough to give some idea of what is happening.

But where the errors are not constant, where the testing criteria change through the course of an outbreak, or where laboratory procedures are modified, the whole shape of the epidemic curve may be distorted. And, through the course of an epidemic, there may be multiple distortions.
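A toy simulation makes the point. The Python sketch below uses invented numbers throughout (nothing in it is Covid-19 data): it generates a "true" epidemic curve, then "reports" it under three hypothetical testing regimes. Constant testing preserves the shape of the curve; a ramping test rate pushes the apparent peak later, so reported cases keep rising after the true incidence has turned; and capped laboratory capacity flattens the peak out of existence.

```python
import numpy as np

days = np.arange(60)

# A "true" epidemic curve: infections rise, peak on day 30, then decline
# as control measures bite. All numbers are invented for illustration.
true_cases = 1000 * np.exp(-((days - 30) / 12.0) ** 2)

# Regime 1: constant ascertainment - a fixed 10% of cases ever get tested.
# The reported curve is scaled down, but its shape and peak are preserved.
constant = 0.10 * true_cases

# Regime 2: testing ramps up from 5% to 60% over the outbreak, so reported
# cases keep climbing even after the true incidence has turned down.
ramped = np.linspace(0.05, 0.60, len(days)) * true_cases

# Regime 3: laboratory capacity is capped, so the reported curve flattens
# and the true size and timing of the peak are lost entirely.
capped = np.minimum(true_cases, 150.0)

for label, series in [("true", true_cases), ("constant", constant),
                      ("ramped", ramped), ("capped", capped)]:
    print(f"{label:9s} peak on day {days[np.argmax(series)]}")
```

Only the constant regime reports a peak on the same day as the true epidemic; the ramped regime places it later, and the capped regime merely reports the first day the laboratories hit their ceiling.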

With the current Covid-19 epidemic in the UK, most of these biases are in some way affecting the figures being published by the government, so much so that it is not possible to estimate the true incidence. And that is without the significant problem of asymptomatic and mild cases, which don't come to the attention of the surveillance system.

To an extent, poor morbidity data can be compensated for by watching the mortality figures, although daily totals can be highly misleading. The figures are a trailing indicator, reflecting the spread of infection in a period of days or weeks before the deaths are notified. Furthermore, not all people who die of the disease are tested, introducing further uncertainties.
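The lag is simple to illustrate. In the sketch below (again with invented numbers: the infection curve, the delay distribution and the fatality ratio are all assumptions, not Covid-19 estimates), deaths are modelled as a delayed echo of earlier infections, and the death curve peaks nearly three weeks after the infection curve does.

```python
import numpy as np

days = np.arange(60)

# Hypothetical infections peaking on day 25 (invented numbers again).
infections = 1000 * np.exp(-((days - 25) / 10.0) ** 2)

# Assumed infection-to-death delay distribution, centred on roughly
# 18 days. An illustrative shape only, not a Covid-19 estimate.
delay = np.exp(-0.5 * ((np.arange(40) - 18) / 5.0) ** 2)
delay /= delay.sum()

ifr = 0.01  # assumed fatality ratio, purely illustrative
deaths = ifr * np.convolve(infections, delay)[: len(days)]

print("infections peak on day", days[np.argmax(infections)])  # day 25
print("deaths peak on day    ", days[np.argmax(deaths)])      # ~day 43
```

By the time the daily death totals peak, the infections that produced them happened weeks earlier – which is why deaths tell you where the epidemic was, not where it is.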

As to the behaviour of the illness in the community, there will be a number of factors which will influence the recorded incidence, some of which will be known to the surveillance authorities. Others won't.

Some control measures introduced may have an effect, but some can make matters worse in unexpected ways. For instance, the schools are to be closed from Friday onwards, ostensibly to reduce the spread of infection, but if this were to result in a large number of children being looked after by their grandparents, this could actually lead to an increased incidence, as asymptomatic children infected a highly vulnerable cohort.

And, while in a national or regional context there may be multiple errors and variations which make the figures difficult to interpret, or in some cases totally unreliable, the situation is even more problematic when trying to compare rates of infection between different countries.

Not only will there be differences in the way data are collected, but in the way respective health services operate (or don't). These can significantly affect both morbidity and mortality figures. And then there may be demographic variations or cultural differences which impact on the way infectious diseases are spread, and on the distribution within the communities. These may also affect incidences.

In short, both morbidity and mortality statistics collected during an outbreak – and especially those reliant on laboratory confirmation – are extremely blunt instruments. They can just as easily mislead as inform, and most certainly lack the precision that would enable the effects of specific control measures to be identified.

Herein, therefore, lies an age-old divide between groups charged with the management of communicable disease. The oldest of these are the so-called "shoe-leather" epidemiologists, those versed in the practical measures required to bring outbreaks under control. From their real-world approach and experience, they tend to be more aware of the fragility of the data collected, and of its limitations.

The other group comprises the so-called "scientific" epidemiologists, mostly academics who rely on computer models and number crunching to give them answers which the mere mortals on the ground are rarely allowed to challenge.

And yet those with practical experience will know that the very fragility of the data means that there is no realistic prospect of fine-tuning control measures in a major outbreak. The controls themselves are blunt instruments, the effects of which are uncertain, and thus must be deployed with maximum effect.

Something of the differences in approach could be illustrated by two gunners in a battleship seeking to sink an enemy warship. The "scientific" gunner might argue that, as long as the particular vulnerability of the target was identified, a single shell might suffice.

The "shoe-leather" gunner, on the other hand, conscious that so many things can go wrong in the void between theory and practice, will insist on firing a full broadside and another, and then another, keeping up the rate of fire until the enemy ship finally sinks.

Unfortunately, the "scientific" epidemiologists invariably enjoy the prestige of title and position, and so often have the ear of government and the media. Their charts, graphs and clever graphics are beguilingly persuasive to the gullible and those who are unaware of the fragility of the data. They also play to an atavistic need to feel that we are in control against such an implacable enemy.

The truth of the matter, though, is that we're firing blind, making decisions without reliable data. And no amount of pretty graphics will change that. Thus, the idea that we can finesse the data to guide us in fine-tuning our response to this epidemic is a cruel fantasy. The only safe way of proceeding is to throw everything we have into the fight in the hope that some of what we do will have the effects needed and save lives.

For all that, this isn't how it's going to work in this epidemic. The tools for dealing successfully with outbreaks have long since deteriorated through lack of use, while the wisdom and experience gained from dealing with past outbreaks have dissipated and, in any case, fallen out of fashion. The number-crunchers are ruling the roost.

I may have hinted as much before, but this does not end well.