Not sure if 10 cases per 100k is a tipping point, or just where you can start seeing the effects of policies and behaviors more readily.
What really matters is the R value at any given time, which is how many other people an infected person infects, on average. This is a function of the policies and behaviors of a given region or population (R0) and the percentage of people still susceptible. Say both NY and TX implement policies that lead to behaviors pushing R0 up to 1.5 (say, opening restaurants), but in NY only 50% of restaurant workers and patrons are still susceptible, while in TX 97% are, because there was never a real outbreak in TX. In that situation, cases in NY will shrink and cases in TX will grow exponentially, even though the behavior of the two populations is identical.
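Here's a back-of-the-envelope version of that example (the 1.5, 50%, and 97% figures are the hypothetical ones above, not measurements):

```python
# Effective reproduction number: Rt = R0 * (fraction still susceptible)
R0 = 1.5                                # hypothetical behavior-driven value in both states
susceptible = {"NY": 0.50, "TX": 0.97}  # hypothetical susceptible fractions

for state, frac in susceptible.items():
    rt = R0 * frac
    print(state, round(rt, 2), "cases shrink" if rt < 1 else "cases grow")
# NY comes out at 0.75 (below 1, outbreak shrinks);
# TX comes out around 1.45 (above 1, outbreak grows).
```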
So if you have R-values that are pretty-close-to-but-still-above 1.0, and very few cases, you won't see much in the data. At small numbers, daily fluctuations look indistinguishable from random noise. But maybe when case counts get up to 10 per 100k, that low-exponent exponential growth gets easier to see.
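A quick simulation of that point, with made-up numbers (a city of 1M people, 3% daily growth, Poisson reporting noise), just to show how the same slow exponential trend is much harder to see at ~1 case per 100k than at ~10 per 100k:

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(30)
growth = 1.03           # assumed: slow exponential growth, R just above 1
population_100k = 10    # assumed: a city of 1 million = 10 "per-100k" units

for start_per_100k in (1, 10):
    expected = start_per_100k * population_100k * growth ** days
    observed = rng.poisson(expected)   # day-to-day counting/reporting noise
    print(f"{start_per_100k}/100k start:", list(observed[:8]), "...", list(observed[-3:]))
# At 1/100k the day-to-day changes are dominated by noise; at 10/100k the
# 3%-per-day drift starts to separate from the fluctuations.
```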
Also, on data sources, I find outbreak.info to be very helpful, as it allows you to look at trends by MSA in addition to state and county.
We can backtrack from the estimated Rt and infected percentage to get an estimate of R0 for a region, and this yields some interesting information.
Using the estimates from covid19-projections.com, the peak Rt in Maricopa County (center of the AZ outbreak) was 1.25. On June 1, they estimate that 3.5% of the county was infected. Since Rt = R0*(uninfected %), this yields 1.30 for the estimated R0 on June 1.
Currently, the Rt for NYC is estimated at 1.01 and the estimated infected percentage is 26.4%. If we solve for current estimated R0, we get 1.37(!)
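The arithmetic behind those two numbers, for anyone who wants to plug in other regions (the inputs are the covid19-projections.com estimates quoted above):

```python
def implied_r0(rt, infected_frac):
    """Invert Rt = R0 * (1 - infected_frac) to back out R0."""
    return rt / (1 - infected_frac)

print(round(implied_r0(1.25, 0.035), 2))  # Maricopa County, June 1 -> 1.3
print(round(implied_r0(1.01, 0.264), 2))  # NYC, current estimate   -> 1.37
```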
In other words, having a high percentage of already infected people allows a region to "get away with" behaviors that would cause frightening exponential growth in a region with a low percentage of already infected people. We should keep this in mind when assigning blame to people and regions for being "careless".
There are two caveats that come to mind for the above example, which tend to cut in opposite directions.
The first is that R0 is not just a function of behavior per se. It also includes hard-coded factors like population density. Obviously, if the populations of the Phoenix metro area and NYC had the same behavior, you would see a significantly higher R0 in NYC (how much higher, though, I don't know). Two regions having the same R0 does *not* mean that they have the same behavior if other factors are significantly different.
The second (and much more speculative) caveat has to do with T-cell resistance. If a certain percentage of the population is immune *without showing an antibody response*, then the relationship between R0 and Rt changes. Now we have Rt = R0*(uninfected % of those not already immune).
If we (completely arbitrarily) assume that 50% of the population has full T-cell immunity to the virus (i.e. cannot get sick, but won't show up in antibody surveys), we can re-run the above calculation. In Maricopa County, we go from 3.5% of the population being infected on June 1 to 7% of the *susceptible* population being infected, while NYC increases from 26.4% to 52.8%. From here, we get revised estimates: Maricopa County's June 1 R0 = 1.25 / 0.93 = 1.34, while the current NYC R0 = 1.01 / 0.472 = 2.14.
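The same calculation in code, with the 50% T-cell immunity figure flagged as the completely arbitrary assumption it is:

```python
def implied_r0_with_tcell(rt, infected_frac_total, tcell_immune_frac):
    # Only the non-T-cell-immune pool can be depleted by infection, so the
    # infected share of *that* pool is higher than the population-wide share.
    susceptible_pool = 1 - tcell_immune_frac
    infected_frac_of_pool = infected_frac_total / susceptible_pool
    return rt / (1 - infected_frac_of_pool)

print(round(implied_r0_with_tcell(1.25, 0.035, 0.5), 2))  # Maricopa, June 1 -> 1.34
print(round(implied_r0_with_tcell(1.01, 0.264, 0.5), 2))  # NYC, current     -> 2.14
```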
I should stress that research into T-cell resistance/immunity is very immature, and important questions (how much resistance does it give, what percentage of the population has it, does a person with it really not produce measurable levels of antibodies) have not come close to being answered. It does, however, help illustrate the folly of treating Rt as monocausally driven by population behavior.
The T-cell resistance theory is really interesting, since it could make the herd immunity threshold much, much lower. Places like NYC, Italy and Spain could already be at a herd immunity level in certain sub-groups.
Here's a pre-print on T-cell resistance vs HIT:
https://www.medrxiv.org/content/10.1101/2020.07.15.20154294v1
Good layman-level intro to the T-cell question:
https://www.reuters.com/article/us-health-coronavirus-immunesystem-idUSKBN24B1D8
Question on R0. How accurately can we really measure that in real time? Is there some formula?
You can't measure R0, but you CAN estimate R(t) by looking at case counts and understanding the duration of the contagious period. And, until you have a vaccine and start quickly moving the needle on % susceptible, it's R(t) you care about.
Multiple sites are attempting to measure real-time R(t), including covid19-projections.com and rt.live
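For intuition only, here's a very crude version of that kind of estimate. This is emphatically not how covid19-projections.com or rt.live actually do it (they fit real epidemiological models and adjust for reporting artifacts); the 5-day serial interval and the synthetic case series are assumptions for illustration:

```python
import numpy as np

SERIAL_INTERVAL = 5  # assumed: average days between an infection and the infections it causes

def crude_rt(daily_cases, serial_interval=SERIAL_INTERVAL, smooth=7):
    # Smooth out day-of-week reporting noise, then take the ratio of case
    # counts one serial interval apart as a rough stand-in for R(t).
    cases = np.convolve(daily_cases, np.ones(smooth) / smooth, mode="valid")
    return cases[serial_interval:] / cases[:-serial_interval]

# Sanity check on synthetic data: steady 3%/day growth should give ~1.03**5 = 1.16
fake_cases = 100 * 1.03 ** np.arange(60)
print(crude_rt(fake_cases)[-3:])   # roughly [1.159, 1.159, 1.159]
```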
I guess what I was getting at was: if you ramp up testing in a certain area and catch more infections, it will make the R(t) look artificially high, correct?
Most of the modelers attempt to correct for testing volume in some way or another.
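A stylized version of the problem and one crude correction (looking at cases per test rather than raw case counts); the numbers are invented, and the real adjustments these sites make are more sophisticated:

```python
cases = [100, 200]    # reported cases, week 1 vs week 2 (invented numbers)
tests = [1000, 2000]  # testing volume doubled between the two weeks

raw_growth = cases[1] / cases[0]                                   # 2.0: looks like rapid spread
positivity_growth = (cases[1] / tests[1]) / (cases[0] / tests[0])  # 1.0: positivity is flat
print(raw_growth, positivity_growth)
```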