Sunday, May 17, 2020

Herd immunity comes faster when some people are more infectious

By now, we all know about "R0" and how it needs to drop below 1.0 for us to achieve "herd immunity" to the COVID virus.

The estimates I've seen are that the "R0" (or "R") for the COVID-19 virus is around 4. That means, in a susceptible population with no interventions like social distancing, the average infected person will pass the virus on to 4 other people. Each of those four passes it on to four others, and each of those 16 newly-infected people passes it on to four others, and the incidence grows exponentially.

But, suppose that 75 percent of the population is immune, perhaps because they've already been infected. Then, each infected person can pass the virus on to only one other person (since the other three who would otherwise be infected, are immune). That means R0 has dropped from 4 to 1. With R0=1, the number of infected people will stay level. As more people become immune, R drops further, and the disease eventually dies out in the population.
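That 75 percent is just the textbook threshold of 1 - 1/R0 for a homogeneous, well-mixed population. A trivial sketch in Python:

```python
# Textbook herd-immunity threshold for a homogeneous, well-mixed population:
# the effective R is R0 times the fraction still susceptible, so it falls
# below 1.0 once the immune fraction exceeds 1 - 1/R0.
def herd_immunity_threshold(r0):
    return 1 - 1 / r0

print(herd_immunity_threshold(4))   # 0.75 -- the familiar 75 percent
```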

That's the argument most experts have been making, so far -- that we can't count on herd immunity any time soon, because we'd need 75 percent of the population to get infected first.

But that can't be right, as "Zvi" points out in a post on LessWrong*. 

(*I recommend LessWrong as an excellent place to look for good reasoning on coronavirus issues, with arguments that make the most sense.)

That's because not everyone is equal in terms of how much they're likely to spread the virus. In other words, everyone has his or her own personal R0. Those with a higher R0 -- people who don't wash their hands much, or shake hands with a lot of people, or just have more face-to-face interactions -- are also likely to become infected sooner. When they become immune, they drop the overall "societal" R0 more than if they were average.

If you want to reduce home runs in baseball by 75 percent, you don't have to eliminate 75 percent of plate appearances. You can probably do it by eliminating as little as, say, 25 percent, if you get rid of the top power hitters only.

As Zvi writes,


"Seriously, stop thinking it takes 75% infected to get herd immunity...

"... shame on anyone who doesn’t realize that you get partial immunity much bigger than the percent of people infected. 

"General reminder that people’s behavior and exposure to the virus, and probably also their vulnerability to it, follow power laws. When half the population is infected and half isn’t, the halves aren’t chosen at random. They’re based on people’s behaviors.

"Thus, expect much bigger herd immunity effects than the default percentages."

-------

But to what extent does the variance in individual behavior affect the spread of the virus?  Is it just a minimal difference, or is it big enough that, for instance, New York City (with some 20 percent of people having been exposed to the virus) is appreciably closer to herd immunity than we think?

To check, I wrote a simulation. It is probably in no way actually realistic in terms of how well it models the actual spread of COVID, but I think we can learn something from the differences in what the model shows for different assumptions about individual R0.

I created 100,000 simulated people, and gave them each a "spreader rating" to represent their R0. The actual values of the ratings don't matter, except relative to the rest of the population. I created a fixed number of "face-to-face interactions" each day, and a person's chance of being involved in one is directly proportional to his or her rating. So, people with a rating of "8" are four times as likely to have a chance to spread/catch the virus as people with a rating of "2".

For each of those interactions, if one person was infected and the other wasn't, there was a fixed probability of the infection spreading to the uninfected person.

For every simulation, I jigged the numbers to get the R0 to be around 4 for the first part of the pandemic, from the start until the point where around three percent of the population was infected. 

The simulation started with 10 people newly infected. I assumed that infected people could spread the virus only for the first 10 days after infection. 
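For concreteness, here's a rough Python sketch of the kind of simulation I mean -- not the exact code, and the transmission probability and number of daily contacts are just placeholder knobs you'd fiddle with to get that initial R0 of around 4:

```python
import random

def simulate(ratings, p_transmit=0.05, contacts_per_day=200_000,
             infectious_days=10, initial_infected=10, days=150):
    """Toy version of the simulation: each day, pairs of people meet with
    probability proportional to their spreader ratings, and an infectious/
    susceptible pair transmits with probability p_transmit."""
    n = len(ratings)
    day_infected = [None] * n                      # day each person caught it
    for person in random.sample(range(n), initial_infected):
        day_infected[person] = 0
    totals = []                                    # cumulative infected by day
    for day in range(1, days + 1):
        for _ in range(contacts_per_day):
            a, b = random.choices(range(n), weights=ratings, k=2)
            for x, y in ((a, b), (b, a)):
                can_spread = (day_infected[x] is not None and
                              0 < day - day_infected[x] <= infectious_days)
                if can_spread and day_infected[y] is None and \
                        random.random() < p_transmit:
                    day_infected[y] = day
        totals.append(sum(d is not None for d in day_infected))
    return totals
```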

--------

The four simulations were as follows (a rough code sketch of the rating assignments appears after the list):

1. Everyone has the same rating.

2. Everyone rolls a die until "1" or "2" comes up, and their spreader rating is the number of rolls it took. (On average, that's 3 rolls. But in a hundred thousand trials, you get some pretty big outliers -- typically at least one rating in the high 20s, since a run of 26 consecutive high rolls happens about one time in 37,877.)

3. Same as #2, except that 1 percent of the population is a superspreader, with a spreader rating of 30. The first nine infected people were chosen randomly, but the tenth was always set to "superspreader."

4. Same as #3, but the superspreaders got an 80 instead of a 30.
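Here's roughly how the four rating assignments would look in code (again, just a sketch -- the exact values don't matter, only their relative sizes):

```python
import random

def ratings_all_same(n):
    # Simulation 1: everyone gets the same spreader rating.
    return [3] * n

def ratings_die_roll(n):
    # Simulation 2: rating = number of rolls of a fair die until a 1 or 2.
    def rolls_until_low():
        rolls = 1
        while random.randint(1, 6) > 2:
            rolls += 1
        return rolls
    return [rolls_until_low() for _ in range(n)]

def ratings_with_superspreaders(n, super_rating):
    # Simulations 3 and 4: same as #2, but 1 percent of people get a fixed
    # high rating (30 for #3, 80 for #4).  Not shown: the detail that the
    # tenth seed infection is always one of the superspreaders.
    ratings = ratings_die_roll(n)
    for person in random.sample(range(n), n // 100):
        ratings[person] = super_rating
    return ratings
```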

--------

In the first simulation, everyone got the same rating. With an initial R0 of around 4, it did, indeed, take around 75 percent of the population to get infected before R0 dropped below 1.0. 

Overall, around 97 percent of the population wound up being infected before the virus disappeared completely.

Here's the graph:





The point where R0 drops below 1.0 is where the next day's increase is smaller than the previous day's increase. It's hard to eyeball that on the curve, but it's around day 32, where the total crosses the 75,000 mark.
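If you'd rather not eyeball it, here's the simple check against the daily totals (crude, since a single noisy day can trigger it -- in practice you'd want to smooth the series first):

```python
def day_r_crosses_one(totals):
    """First day whose new infections are fewer than the previous day's --
    a rough proxy for the day the effective R drops below 1.0."""
    new = [b - a for a, b in zip(totals, totals[1:])]
    for i in range(1, len(new)):
        if new[i] < new[i - 1]:
            return i + 2          # convert list index back to a day number
    return None
```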

-------

As I mentioned, I jigged the other three curves so that for the first days, they had about the same R0 of around 4, so as to match the "everyone the same" graph.

Here's the graph of all four simulations for those first 22 days:





Aside from the scale, they're pretty similar to the curves we've seen in real life. Which means that, based on the data we've seen so far, we can't really tell from the numbers which simulation is closest to our true situation.

But ... after that point, as Zvi explained, the four curves do diverge. Here they are in full:





Big differences, in the direction that Zvi explained. The bigger the variance in individual R0, the more attenuated the progression of the virus.

Which makes sense. All four curves had an R0 of around 4.0 at the beginning. But in the bottom curve, 99 percent of the population had ratings averaging 3 encounters, and the other 1 percent were superspreaders at 80 encounters. Once those superspreaders are no longer superspreading, the R0 plummets.

In other words: herd immunity brings the curve under control by reducing opportunity for infection. In the bottom curve, eliminating the top 1% of the population reduces opportunity by 40%. In the top curve, eliminating 1% of the population reduces opportunity only by 2%.
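Here's one back-of-the-envelope way to get numbers in that ballpark, if you assume each interaction involves two rating-weighted participants and "counts" whenever either one is a superspreader:

```python
# Back-of-the-envelope version, assuming participants in each interaction
# are drawn in proportion to rating, and an interaction "counts" if either
# participant is a superspreader.
def opportunity_share(frac_super, super_rating, normal_rating):
    slot_share = (frac_super * super_rating /
                  (frac_super * super_rating + (1 - frac_super) * normal_rating))
    return 1 - (1 - slot_share) ** 2      # chance an interaction touches one

print(opportunity_share(0.01, 80, 3))     # ~0.38 -- roughly the 40 percent
print(opportunity_share(0.01, 3, 3))      # ~0.02 -- the uniform case
```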

-------

For all four curves, here's where R0 dropped below 1.0:

75% -- all people the same
58% -- all different, no superspreaders
44% -- all different, superspreaders 10x average
20% -- all different, superspreaders 26x average

And here's the total number of people who ever got infected:

97% -- all people the same
81% -- all different, no superspreaders
65% -- all different, superspreaders 10x average
33% -- all different, superspreaders 26x average

--------

Does it seem counterintuitive that the more superspreaders, the better the result?  How can more infecting make things better?

It doesn't. More *initial* infecting makes things better *only when holding the initial R0 constant.*

If the aggregate R0 is still only 4.0 after including superspreaders, it must mean that the non-superspreaders have an R0 significantly less than 4.0. You can think of an "R=4.0 with superspreaders" society as maybe an "R=2.0" society that's been infected by 1% gregarious handshaking huggers and church-coughers.

In other words, the good news is: if everyone were at the median, the overall R0 would be less than 4. It just looks like R0=4 because we're infested by dangerous superspreaders. Those superspreaders will more quickly turn benign and lower our R0 faster.

---------

So, the shape of the distribution of spreaders matters a great deal. Of course, we don't know the shape of our distribution, so it's hard to estimate which line in the chart we're closest to. 

But we *do* know that we have at least a certain amount of variance -- some people shake a lot of hands, some people won't wear masks, some people are probably still going to hidden dance parties. So I think we can conclude that we'll need significantly less than 75 percent to get to herd immunity.

How much less?  I guess you could study data sources and try to estimate. I've seen at least one non-wacko argument that says New York City, with an estimated infection rate of at least 20 percent, might be getting close. Roughly speaking, that would be something like the fourth line on the graph, the one on the bottom.  

Which line is closest, if not that one?  My gut says ... given that we know the top line is wrong, and from what we know about human nature ... the second line from the top is a reasonable conservative assumption. Changing my default from 75% to 58% seems about right to me. But I'm pulling that out of my gut. The very end part of my gut, to be more precise. 

At least we know for sure that the 75%, the top line of the graph, must be too pessimistic. To estimate how much too pessimistic, we need more data and more arguments.




Wednesday, May 06, 2020

Regression to higher ground


We know that if an MLB team wins 76 games in a particular season, it's probably a better team than its record indicates. To get its talent from its record, we have to regress to the mean.

Tango has often said that straightforward regression to the mean sometimes isn't right -- you have to regress to the *specific* mean you're concerned with. If Wade Boggs hits .280, you shouldn't regress him towards the league average of .260. You should regress him towards his own particular mean, which is more like .310 or something.

This came up when I was figuring regression to the mean for park factors. To oversimplify for purposes of this discussion: the distribution of hitters' parks in MLB is bimodal. There's Coors Field, and then everyone else. Roughly like this random pic I stole from the internet:




Now, suppose you have a season of Coors Field that comes in at 110. If you didn't know the distribution was bimodal, you might regress it back to the mean of 100, by moving it to the left. But if you *do* know that the distribution is bimodal, and you can see the 110 belongs to the hump on the right, you'd regress it to the Coors mean of 113, by moving it to the right.

But there are times when there is no obvious mean to regress to.

--------

You have a pair of perfectly fair 9-sided dice. You want to count the number of rolls it takes before you roll your first snake eyes (which has a 1 in 81 chance each roll). On average, you expect it to take 81 rolls, but that can vary a lot. 

You don't have a perfect count of how many rolls it took, though. Your counter is randomly inaccurate with an SD of 6.4 rolls (coincidentally the same as the SD of luck for team wins).

You start rolling. Eventually you get snake eyes, and your counter estimates that it took 76 rolls. The mean is 81. What's your best estimate of the actual number? 

This time, it should be LOWER than 76. You actually have to regress AWAY from the mean.
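If you don't believe that, here's a quick simulation to convince yourself (taking the counter's error as roughly normal, which is my own simplifying assumption):

```python
import random

# Simulate lots of "first snake eyes" counts (1-in-81 chance per roll), add
# counting error with SD 6.4, and average the TRUE counts in the cases where
# the counter happened to read 76.
def average_true_count(observed=76, trials=1_000_000):
    true_counts = []
    for _ in range(trials):
        count = 1
        while random.random() >= 1 / 81:
            count += 1
        if round(count + random.gauss(0, 6.4)) == observed:
            true_counts.append(count)
    return sum(true_counts) / len(true_counts)

print(average_true_count())   # comes out a bit BELOW 76, not above
```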

-------

Let's go back to the usual case for a second, where a team wins 76 games. Why do we expect its talent to be higher than 76? Because there are two possibilities:

(a) its talent was lower than 76, and it got lucky; or
(b) its talent was higher than 76, and it got unlucky.

But (b) is more likely than (a), because the true number will be higher than 76 more often than it'll be lower than 76. 

You can see that from this graph that represents distribution of team talent:



The blue bars are the times that talent was less than 76, and the team got lucky.  The pink bars are the times the talent was more than 76, and the team got unlucky.

The blue bars around 76 are shorter than the pink bars around 76. That means better teams getting unlucky are more common than worse teams getting lucky, so the average talent must be higher than 76.

But the dice case is different. Here's the distribution of when the first snake-eyes (1 in 81 chance) appears:




The mean is still 81, but, this time, the curve slopes down at 76, not up.

Which means: it's more likely that you rolled less than 76 times and counted too high, than that you rolled more than 76 times and counted too low. 

Which means that to estimate the actual number of rolls, you have to regress *down* from 76, which is *away* from the mean of 81.

--------

That logic -- let's call it the "Dice Method" -- seems completely correct, right?

But, the standard "Tango Method" contradicts it.

The SD of the distribution of the dice graph is around 80.5. The SD of the counting error is 6.4. So we can calculate:

SD(true)     = 80.5
SD(error)    =  6.4
--------------------
SD(observed) = 80.75

By the Tango method, we have to regress by (6.4/80.75)^2, which is less than 1% of the way to the mean. Small, but still towards the mean!
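Here's that arithmetic, for anyone who wants to check it:

```python
import math

p = 1 / 81
sd_true = math.sqrt(1 - p) / p                 # SD of the geometric: ~80.5
sd_error = 6.4                                 # SD of the counting error
sd_observed = math.hypot(sd_true, sd_error)    # ~80.75

print((sd_error / sd_observed) ** 2)           # ~0.006 -- less than 1 percent
```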

So we have two answers, that appear to contradict each other:

-- Dice Method: regress away from the mean
-- Tango Method: regress towards the mean

Which is correct?

They both are.

The Tango Method is correct on average. The Dice Method is correct in this particular case.

If you don't know how many rolls you counted, you use the Tango Method.

If you DO know that the count was 76 rolls, you use the Dice Method.

------

Side note:

The Tango Method's regression to the mean looked wrong to me, but I think I figured out where it comes from.

Looking at the graph at a quick glance, it looks like you should always regress to the left, because the left side of every point is always higher than the right side of every point. That means that if you're below the mean of 81, you regress away from the mean (left). If you're above the mean of 81, you regress toward the mean (still left).

But, there are a lot more datapoints to the left of 81 than to the right of 81 -- by a ratio of about 64 percent to 36 percent. So, overall, it looks like the average should be regressing away from the mean.

Except ... it's not true that the left is always higher than the right. Suppose your counter said "1". You know the correct count couldn't possibly have been zero or less, so you have to regress to the right. 

Even if your counter said "2" ... sure, a true count of 1 is more likely than a true count of 3, but 4, 5, and 6 are more likely than 0, -1, or -2. So again you have to regress to the right.

Maybe the zero/negative logic is a factor when you have, say, 8 tosses or less, just to give a gut estimate. Those might constitute, say, 10 percent of all snake eyes rolled. 

So, the overall "regress less than 1 percent towards the mean of 81" is the average of:

-- 36% regress left  towards the mean a bit (>81)
-- 54% regress left  away from the mean a bit (9-81)
-- 10% regress right towards the mean a lot (8 or less)
   -----------------------------------------------------
-- Overall average: regress towards the mean a tiny bit.
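Those percentages are easy to check against the geometric distribution (the cutoffs are rough, but close enough):

```python
p = 1 / 81
above_81 = (1 - p) ** 81                    # counts above 81: about 36 percent
eight_or_less = 1 - (1 - p) ** 8            # counts of 8 or less: about 9 percent
in_between = 1 - above_81 - eight_or_less   # counts from 9 to 81: about 54 percent

print(above_81, in_between, eight_or_less)
```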

--------

The "Tango Method" and the "Dice Method" are just consequences of Bayes' Theorem that are easier to implement than doing all the Bayesian calculations every time. The Tango Method is a mathematically precise consequence of Bayes Theorem, and the Dice Method is an heuristic from eyeballing. Tango's "regress to the specific mean" is another Bayes heuristic.

We can reduce the three methods into one by noting what they have in common -- they all move the estimate from lower on the curve to higher on the curve. So, instead of "regress to the mean," maybe we can say

"regress to higher ground."

That's sometimes how I think of Bayes' Theorem in my own mind. In fact, I think you can explain Bayes exactly, as a more formal method of figuring where the higher ground is, by explicitly calculating how much to weight the closer ground relative to the distant ground. 
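For the dice example, that explicit weighting takes only a few lines (again taking the counter error as roughly normal):

```python
import math

# Bayes spelled out for the dice example: weight each candidate true count
# by (prior chance of that count) times (chance the counter misreads it as
# 76), then take the weighted average.
p = 1 / 81
def prior(t):
    return p * (1 - p) ** (t - 1)              # geometric distribution

def likelihood(t, observed=76, sd=6.4):
    return math.exp(-((observed - t) / sd) ** 2 / 2)

weights = {t: prior(t) * likelihood(t) for t in range(1, 2000)}
posterior_mean = sum(t * w for t, w in weights.items()) / sum(weights.values())
print(posterior_mean)    # a bit under 76: regressed AWAY from the mean of 81
```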

