Stanford U Study: COVID-19 Prevalence 50-85x Known Cases

28,683 Views | 269 Replies | Last: 5 yr ago by Player To Be Named Later
TXAggie2011
How long do you want to ignore this user?
AG
Quote:

And I am very interested into why when the virus got here is such a big deal?
I don't understand what you're asking.
oragator
By the way, on the Roosevelt they tested nearly everyone. 60 percent were symptom free. And that doesn't account for how many showed symptoms later. It is also among a population that was younger and healthier, and probably less likely to show symptoms. But even taken at face value, the death rate would still be north of 1 percent, or ten times the flu, and that doesn't even take into account the higher transmission rate.

https://www.reuters.com/article/us-health-coronavirus-usa-military-sympt-idUSKCN21Y2GB
dermdoc
AG
TXAggie2011 said:

Quote:

And I am very interested into why when the virus got here is such a big deal?
I don't understand what you're asking.
So why is it a big deal if I think the virus was here earlier than some people? It is almost like there is a template.

And would not that be good news? Like the Stanford study seemed to indicate?
No material on this site is intended to be a substitute for professional medical advice, diagnosis or treatment. See full Medical Disclaimer.
TXAggie2011
AG
dermdoc said:

TXAggie2011 said:

Quote:

And I am very interested into why when the virus got here is such a big deal?
I don't understand what you're asking.
So why is it a big deal if I think the virus was here earlier than some people? It is almost like there is a template.
You're asking why people are debating with you? Why do you keep debating with them?

If you believe you have a strong case, make the case.
dermdoc
AG
TXAggie2011 said:

dermdoc said:

TXAggie2011 said:

Quote:

And I am very interested into why when the virus got here is such a big deal?
I don't understand what you're asking.
So why is it a big deal if I think the virus was here earlier than some people? It is almost like there is a template.
You're asking why people are debating with you? Why do you keep debating with them?

If you believe you have a strong case, make the case.
Nobody knows. So I can not. And neither can anybody else.

And edited to add that I respect your posts. I am not an engineer and have no idea how y'all think.
DTP02
AG
oragator said:

By the way, on the Roosevelt they tested nearly everyone. 60 percent were symptom free. And that doesn't account for how many showed symptoms later. It is also among a population that was younger and healthier, and probably less likely to show symptoms. But even taken at face value, the death rate would still be north of 1 percent, or ten times the flu, and that doesn't even take into account the higher transmission rate.

https://www.reuters.com/article/us-health-coronavirus-usa-military-sympt-idUSKCN21Y2GB


Are you talking about the death rate on the Roosevelt? It's 1 out of 600+ infected at present.

I'm not sure what you're discussing here
Zobel
AG
So a couple of things. One is that this study was authored by skeptics - three of them (at least) have penned high profile op-eds. Doesn't mean their findings aren't valid, but it does mean we should expect them to be more likely to find what they're looking for (this is a well-observed phenomenon - nice article talking about this).

The other is that the sensitivity of the test is pretty bad. The manufacturer said 91.8% which is bad enough but their locally tested data was 67.6%! Specificity is good, 99.5%.

Sensitivity is the probability that someone who truly has the disease tests positive.

Specificity is the probability that someone who truly doesn't have the disease tests negative.

The validity of their tests is extremely dependent on the actual (unknown) prevalence in the population. For example (using the manufacturer's numbers - the most generous / accurate assumption)

Let's say the true prevalence is 10%.

For any particular test result you'd expect 9.6% positive and 90.4% negative.
For any particular positive result, it has a 95% chance of being true.
For any particular negative result, it has a 99% chance of being true.

But... what if the disease prevalence is 1%?

For any particular test result you'd expect 1.4% positive and 98.6% negative.
For any particular positive result, it has a 65% chance of being true!
For any particular negative result, it has a 99% chance of being true.

Same numbers... but with the 67.6% sensitivity

10%
For any particular test result you'd expect 7.2% positive and 92.8% negative.
For any particular positive result, it has a 93.6% chance of being true.
For any particular negative result, it has a 96.5% chance of being true.

1%
For any particular test result you'd expect 1.1% positive and 98.9% negative.
For any particular positive result, it has a 57.7% chance of being true.
For any particular negative result, it has a 99.6% chance of being true.


They got 50 positive cases out of 3,300 tests, which is a crude prevalence of 1.5%.
Post removed:
by user
Windy City Ag
AG
Quote:

One is that this study was authored by skeptics

I don't even know what this means . . . . there is no agreed-upon knowledge base. This is all incredibly unknown, so it is really, really hard to be a "skeptic." This team offers one of many opinions. They offered up a hypothesis and went out to test its validity.
oragator
I am saying that the percent of asymptomatic cases from real world examples like this can be used to inform on much larger groups, with the knowledge that it's one sample set amongst a relatively small population. But there was another study in Italy that came up with similar numbers. And neither study accounted for those that developed symptoms later.
The Roosevelt crew will have a far lower rate of death than the general population - not only are they young, which already means a low death rate, they are far less likely to be obese or have other underlying conditions, or they wouldn't have made it into the service or through the first few months.
But a nearly wholly young, relatively healthy group like this is probably the best case as far as asymptomatics go, and it's still nowhere near what was in the Stanford study. Even if we are overestimating the death rate by half, it's still a very scary number.

Honestly as a scientist being a skeptic right now is where the reward is. It's a contrary voice in a sea of panic so it will get lots of attention, and we will likely never get to an infection rate high enough to definitively prove them wrong. More power to them, it's not going to change much either way anyway.
NASAg03
k2aggie07 said:

So a couple of things. One is that this study was authored by skeptics - three of them (at least) have penned high profile op-eds. Doesn't mean their findings aren't valid, but it does mean we should expect them to be more likely to find what they're looking for (this is a well-observed phenomenon - nice article talking about this).

The other is that the sensitivity of the test is pretty bad. The manufacturer said 91.8% which is bad enough but their locally tested data was 67.6%! Specificity is good, 99.5%.

Sensitivity is the probability that someone who truly has the disease tests positive.

Specificity is the probability that someone who truly doesn't have the disease tests negative.

The validity of their tests is extremely dependent on the actual (unknown) prevalence in the population. For example (using the manufacturer's numbers - the most generous / accurate assumption)

Let's say the true prevalence is 10%.

For any particular test result you'd expect 9.6% positive and 90.4% negative.
For any particular positive result, it has a 95% chance of being true.
For any particular negative result, it has a 99% chance of being true.

But... what if the disease prevalence is 1%?

For any particular test result you'd expect 1.4% positive and 98.6% negative.
For any particular positive result, it has a 65% chance of being true!
For any particular negative result, it has a 99% chance of being true.

Same numbers... but with the 67.6% sensitivity

10%
For any particular test result you'd expect 7.2% positive and 92.8% negative.
For any particular positive result, it has a 93.6% chance of being true.
For any particular negative result, it has a 96.5% chance of being true.

1%
For any particular test result you'd expect 1.1% positive and 98.9% negative.
For any particular positive result, it has a 57.7% chance of being true.
For any particular negative result, it has a 99.6% chance of being true.


They got 50 positive cases out of 3,300 tests, which is a crude prevalence of 1.5%.
Who was involved in this study?

John P. A. Ioannidis, Professor of Medicine, of Health Research and Policy and of Biomedical Data Science, at Stanford University School of Medicine, and a Professor of Statistics at Stanford University School of Humanities and Sciences.

Ioannidis has received numerous awards and honorary titles and he is a member of the US National Academy of Medicine, of the European Academy of Sciences and Arts and an Einstein Fellow.

Ioannidis's 2005 paper "Why Most Published Research Findings Are False" is the most downloaded paper in the Public Library of Science and is considered foundational to the field of metascience.

Jay Bhattacharya is a Professor of Medicine at Stanford University. He is a research associate at the National Bureau of Economic Research, a senior fellow at the Stanford Institute for Economic Policy Research, and at the Stanford Freeman Spogli Institute. He holds courtesy appointments as Professor in Economics and in Health Research and Policy. He directs the Stanford Center on the Demography of Health and Aging. Dr. Bhattacharya's research focuses on the economics of health care around the world with a particular emphasis on the health and well-being of vulnerable populations. Dr. Bhattacharya's peer-reviewed research has been published in economics, statistics, legal, medical, public health, and health policy journals. He holds an MD and PhD in economics from Stanford University.

They have earned the right to question the current narrative and seek out answers to a very complex problem.

But their doubts about the panic surrounding this pandemic don't fit YOUR bias, so of course you don't believe them.

Tell me again what your credentials are, and why your data reduction and conclusions are more worthwhile than theirs?
Mike Shaw - Class of '03
72 colo ag
AG
They tested everyone in town. However, the test they used was created locally, and I have no idea how the CDC or another medical facility would treat it.
DTP02
AG
oragator said:

I am saying that the percent of asymptomatic cases from real world examples like this can be used to inform on much larger groups, with the knowledge that it's one sample set amongst a relatively small population. But there was another study in Italy that came up with similar numbers. And neither study accounted for those that developed symptoms later.
The Roosevelt crew will have a far lower rate of death than the general population - not only are they young, which already has a low death rate, they are far less likely to be obese or have other underlying conditions, or they wouldn't have made it in to the service or through the first few months.
But a nearly wholly young relatively healthy group like this are probably the best case as far as asymptomatics go, and it's still nowhere near what was in the Stanford study, but even if we are overestimating the death rate by half, it's still a very scary number.

Honestly as a scientist being a skeptic right now is where the reward is. It's a contrary voice in a sea of panic so it will get lots of attention, and we will likely never get to an infection rate high enough to definitively prove them wrong. More power to them, it's not going to change much either way anyway.


I'm still not following your logic here talking about the death rate on the Roosevelt. It's one person away from being 0%.

But my bigger question is why are you assuming that the people who popped a positive on the antibody test were all asymptomatic. I haven't seen anything that says that was the case.
Stymied
AG
Exactly. Attack the science not the people. If you don't like their result and your first statement is to call out their motive, how can I trust your intent?
DTP02
AG
k2aggie07 said:

So a couple of things. One is that this study was authored by skeptics - three of them (at least) have penned high profile op-eds. Doesn't mean their findings aren't valid, but it does mean we should expect them to be more likely to find what they're looking for (this is a well-observed phenomenon - nice article talking about this).

The other is that the sensitivity of the test is pretty bad. The manufacturer said 91.8% which is bad enough but their locally tested data was 67.6%! Specificity is good, 99.5%.

Sensitivity is the probability that someone who truly has the disease tests positive.

Specificity is the probability that someone who truly doesn't have the disease tests negative.

The validity of their tests is extremely dependent on the actual (unknown) prevalence in the population. For example (using the manufacturer's numbers - the most generous / accurate assumption)

Let's say the true prevalence is 10%.

For any particular test result you'd expect 9.6% positive and 90.4% negative.
For any particular positive result, it has a 95% chance of being true.
For any particular negative result, it has a 99% chance of being true.

But... what if the disease prevalence is 1%?

For any particular test result you'd expect 1.4% positive and 98.6% negative.
For any particular positive result, it has a 65% chance of being true!
For any particular negative result, it has a 99% chance of being true.

Same numbers... but with the 67.6% sensitivity

10%
For any particular test result you'd expect 7.2% positive and 92.8% negative.
For any particular positive result, it has a 93.6% chance of being true.
For any particular negative result, it has a 96.5% chance of being true.

1%
For any particular test result you'd expect 1.1% positive and 98.9% negative.
For any particular positive result, it has a 57.7% chance of being true.
For any particular negative result, it has a 99.6% chance of being true.


They got 50 positive cases out of 3,300 tests, which is a crude prevalence of 1.5%.


I felt like I was reading a climactic repudiation of this study, but then your takeaway at the end is that the results might show more like 30-48 times the prevalence than thought even at a crude prevalence that is lower than the study showed? That would still be massive news.
oragator
I'm not talking about the death rate on the Roosevelt, only the asymptomatic rate.
The post that I was responding to was saying that this Stanford study meant that the real death rate was potentially very small compared to current estimates because 50 times more people had the virus than we think; I was arguing otherwise. If we are correctly capturing roughly half of the actual cases eventually in our daily numbers, as the ship info suggests (under the assumption that most symptomatic people will likely eventually get tested), our expected death rate should only be off by at most half. And half of two or three percent is still a lot of people. It also means we are light years from herd immunity, or any of the other reasons to let our guard down.

That's all.
Windy City Ag
AG
Quote:

I felt like I was reading a repudiation of this study, but then the takeaway at the end is that the results might show more like 30-48 times the prevalence than thought? That would still be massive news.


Think so... I haven't seen the actual confirmed-case numbers in Santa Clara, but their punchline is that they are much, much lower than what this study signals as the real number of cases.

DTP02
AG
oragator said:

I'm not talking about the death rate on the Roosevelt, only the asymptomatic rate.
The post that I was responding to was saying that this Stanford study meant that the real death rate was potentially very small compared to current estimates because 50 times more people had the virus than we think; I was arguing otherwise. If we are correctly capturing roughly half of the actual cases eventually in our daily numbers, as the ship info suggests (under the assumption that most symptomatic people will likely eventually get tested), our expected death rate should only be off by at most half. And half of two or three percent is still a lot of people. It also means we are light years from herd immunity, or any of the other reasons to let our guard down.

That's all.


I don't know why you would assume that most symptomatic people would be tested. That's not the reality in the US to this point at all. The operative policy in most US locales to this point is to tell symptomatic patients not to get tested.
Zobel
AG
I say skeptic because three of them wrote editorials saying essentially that the disease was being overblown, and offered some crude estimates for severity that imply the severity is much lower than the 3.4% number. Bendavid and Bhattacharya wrote one in the WSJ; Ioannidis wrote one for STAT. It doesn't mean they're wrong; it means you know what they were expecting to find (high prevalence -> low severity) and should consider that when reading the study. Don't misconstrue what I'm saying. I don't think they did anything wrong, but it is a truism that people generally find the result they're looking for. That's why we do meta-analysis and replication.
PJYoung
AG


That's the former head of the FDA.
Zobel
AG

Quote:

Who was involved in this study?

They have earned the right to question the current narrative and seek out answers to a very complex problem.

But their doubts about the panic surrounding this pandemic don't fit YOUR bias, so of course you don't believe them.

Tell me again what your credentials are, and why your data reduction and conclusions are more worthwhile than theirs?
Where did I say anything about my bias? I showed theirs, but I never mentioned anything about what I think the prevalence is.

I didn't question any part of their work. I'm not doing any data reduction or offering any conclusions. I'm explaining the implications of the sensitivity they're having to work with. That is math, it's not an accusation. They account for the math I'm describing in their paper, but the implications of the sensitivity number are probably not immediately obvious to people who don't deal with Bayesian probability (like me - I checked because I'm not really used to doing sensitivity / specificity and I was curious).
Zobel
AG
No repudiation, just showing the implication of the *extremely wide* range of sensitivity they used in three scenarios. Even the most generous is really tough for a disease where prevalence may be <10%. The test's error rate is on the same order as the prevalence in the population.

Imagine you have a specificity of 90% (a 10% false-positive rate) and a prevalence of 1%. You test 100 people and get 10-11 positives. That means each positive has only about a 10% chance of being a true positive. The prevalence has a big impact on how useful their findings are. They know this, but I imagine a lot of people don't.

The crude prevalence was simply the number of positives they got. 50 positive tests out of 3300 = 1.5%. You have to adjust for sensitivity and specificity for the two kinds of test errors. I assume they did so correctly, but when you're measuring the width of a hair with a yardstick there's only so far you can get.
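Reading the toy example as a 10% false-positive rate (i.e., roughly 90% specificity) with 90% sensitivity at 1% prevalence, the arithmetic works out like this (a sketch of the thought experiment, not the study's numbers):

```python
prev, n = 0.01, 100          # 1% prevalence, 100 people tested
sens, spec = 0.90, 0.90      # 90% sensitivity, 90% specificity

true_pos = n * prev * sens               # ~0.9 expected true positives
false_pos = n * (1 - prev) * (1 - spec)  # ~9.9 expected false positives
total_pos = true_pos + false_pos         # ~10.8 positives out of 100 tests
ppv = true_pos / total_pos               # each positive is only ~8% likely to be real
```

So roughly 10-11 positives, of which fewer than one is expected to be genuine - which is the point: when the false-positive rate rivals the prevalence, most positives are noise.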
Zobel
AG
Other unknown is whether the test they used is positive to other coronaviruses. I scanned the paper but I didn't see it mentioned.
SirLurksALot
PJYoung said:



That's the former head of the FDA.


2% to 5% would still mean that total infections are between 9 and 23 times greater than the current number of confirmed cases.
DTP02
AG
PJYoung said:



That's the former head of the FDA.


This also seems like an attempt to repudiate the implications of the study.

5% of the country (and higher in hot spots) would be 16m people infected in the US. I haven't seen many estimates that high, so I'm not sure why he frames it that way.

But that would put the mortality rate at, what, around .2%?
AggieMD95
AG
ETFan said:

marloag said:

Making the mortality rate somewhere close to that of the flu. Interesting


We can look to NY to see this clearly isn't like the flu.


Bottom line:
Worse than flu but much less virulent than once feared
Zobel
AG
They analyzed the results as follows:

First, raw prevalence (50/3300 = 1.5%).
Then they normalized their dataset for demographics - that got it up to 2.81%. I'm not sure I understand that, to be honest. Random is random, and it seems like re-weighting your random set introduces noise. Not sure why zip code and demographics get you closer to the true population unless you know how those should be distributed. But whatever, it's 2.81%.

They used three scenarios: S1, S2, and S3. S1 is the manufacturer's data, S2 is their estimate, and S3 is a combination of the two.

S1 (91.8% sensitivity, 99.5% specificity)
S2 (67.6% sensitivity, 100% specificity)
S3 (80.3% sensitivity, 99.5% specificity)

S2 looks suspect to me because of that 100% specificity, but I guess that's why they gave us all three.

Found a really cool calculator:
https://epitools.ausvet.com.au/trueprevalence

All cases, 3300 sample size, 50 positives (not including their demographic adjustment)

S1 (91.8% sensitivity, 99.5% specificity) - true prevalence 1.11%
S2 (67.6% sensitivity, 100% specificity) - true prevalence 2.24%
S3 (80.3% sensitivity, 99.5% specificity) - true prevalence 1.27%

It figures: as sensitivity drops (with specificity held constant), the true prevalence implied by a given number of positives goes up - you're missing more true cases, but not ruling out any more false positives. Anyway, those make some sense to me. The combined S3 has a 95% confidence interval from 0.82% to 1.87%.
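The adjustment that calculator applies is, as far as I can tell, the standard Rogan-Gladen correction for imperfect tests; a quick sketch reproduces the three scenario numbers:

```python
def true_prevalence(apparent, sens, spec):
    """Rogan-Gladen estimator: corrects an apparent (test-positive)
    rate for the test's sensitivity and specificity."""
    return (apparent + spec - 1) / (sens + spec - 1)

apparent = 50 / 3300  # crude prevalence, ~1.5%

for name, sens, spec in [("S1", 0.918, 0.995),
                         ("S2", 0.676, 1.000),
                         ("S3", 0.803, 0.995)]:
    print(f"{name}: {true_prevalence(apparent, sens, spec):.2%}")
# S1: 1.11%
# S2: 2.24%
# S3: 1.27%
```

The formula also shows why lower sensitivity pushes the corrected prevalence up for the same positive count: the denominator shrinks while the numerator stays put.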
Stat Monitor Repairman
marloag said:

Making the mortality rate somewhere close to that of the flu. Interesting
84AGEC
AG
If they had it before Jan 20

Their antibodies are lying
Windy City Ag
AG
Quote:

I say skeptic because three of them wrote editorials saying essentially that the disease was being overblown, and offered some crude estimates for severity that imply the severity is much lower than the 3.4% number. Bendavid and Bhattacharya wrote one in the WSJ; Ioannidis wrote one for STAT. It doesn't mean they're wrong; it means you know what they were expecting to find (high prevalence -> low severity) and should consider that when reading the study. Don't misconstrue what I'm saying. I don't think they did anything wrong, but it is a truism that people generally find the result they're looking for. That's why we do meta-analysis and replication.

I guess my problem is that they wrote those articles about a specific Oxford model . . . . to say they are "skeptics" is to say that the Oxford model is the accepted standard, which is a big stretch. You say "crude estimates," assuming other estimates are more precise. No one knows anything . . . . for you to doubt their work so consistently means you have your own belief. So if this team is trying to validate its pre-existing beliefs, you may be as well.
Zobel
AG
No, they wrote them generally against the WHO estimate of 3.4% IFR. The Imperial study used an IFR of 0.66%.

Oxford released a paper showing a model that supposed a very high infectious rate, and a correspondingly lower IFR.

My point is these guys have expressed a strong position of doubt, up to and including writing op-eds in major papers. That's significant to me. These are not disinterested observers at this point.

Me? I got nothing riding on this. The latest study I've seen estimated 2.7% current infections in the US. I think that makes sense, and I don't have any problem with this study. It's good, we need more studies like this.
Ranger222
AG
There is a UK model that expects less than 4% of the population to have been exposed, and I think it was a Dutch study that is looking at serology of samples from blood banks; they are finding close to 4% too, but it's an interim analysis and they haven't processed all the samples yet.

DadHammer
AG
Wrong
AnScAggie
AG
PJYoung said:

Um...

"Dr. John Brownstein, an epidemiologist at Boston Children's Hospital and an ABC News contributor, cautioned that the results for the California county are not necessarily representative of the U.S. population and noted the use of online ads to find participants could skew the candidate pool."


Really?? How is this any different than the flyers posted around campus and classified ads posted when I was in school? Certain people sure like to shoot down any good news.
NASAg03
k2aggie07 said:

No, they wrote them generally against the WHO estimate of 3.4% IFR. The Imperial study used an IFR of 0.66%.

Oxford released a paper showing a model that supposed a very high infectious rate, and a correspondingly lower IFR.

My point is these guys have expressed a strong position of doubt, up to and including writing op-eds in major papers. That's significant to me. These are not disinterested observers at this point.

Me? I got nothing riding on this. The latest study I've seen estimated 2.7% current infections in the US. I think that makes sense, and I don't have any problem with this study. It's good, we need more studies like this.


Their extensive years of experience, research, and hundreds of peer-reviewed publications naturally give them a hunch when something seems off. It's called a hypothesis, which is not the same as bias.

And they published op-eds to quickly push for change on something that seemed like it could have fast negative implications if we didn't change course. It also generated revenue to fund studies.
Mike Shaw - Class of '03
 