I don't understand what you're asking.
Quote:
And I am very interested in why when the virus got here is such a big deal?
dermdoc said:
So why is it a big deal if I think the virus was here earlier than some people believe? It is almost like there is a template.
TXAggie2011 said:
You're asking why people are debating with you? Why do you keep debating with them?
dermdoc said:
Nobody knows. So I can not. And neither can anybody else.
If you believe you have a strong case, make the case.
oragator said:
By the way, on the Roosevelt they tested nearly everyone. 60 percent were symptom-free, and that doesn't account for how many showed symptoms later. It is also a population that was younger and healthier, and probably less likely to show symptoms. But even taken at face value, the death rate would still be north of 1 percent, or ten times that of the flu, and that doesn't even take into account the higher transmission rate.
https://www.reuters.com/article/us-health-coronavirus-usa-military-sympt-idUSKCN21Y2GB
Quote:
One is that this study was authored by skeptics
Who was involved in this study?
k2aggie07 said:
So a couple of things. One is that this study was authored by skeptics - three of them (at least) have penned high profile op-eds. Doesn't mean their findings aren't valid, but it does mean we should expect them to be more likely to find what they're looking for (this is a well-observed phenomenon - nice article talking about this).
The other is that the sensitivity of the test is pretty bad. The manufacturer said 91.8% which is bad enough but their locally tested data was 67.6%! Specificity is good, 99.5%.
Sensitivity is the probability that someone who truly has the disease tests positive.
Specificity is the probability that someone who truly does not have the disease tests negative.
The validity of their tests is extremely dependent on the actual (unknown) prevalence in the population. For example, using the manufacturer's numbers (the most generous assumption):
Let's say the true prevalence is 10%.
For any particular test result you'd expect 9.6% positive and 90.4% negative.
For any particular positive result, it has a 95% chance of being true.
For any particular negative result, it has a 99% chance of being true.
But... what if the disease prevalence is 1%?
For any particular test result you'd expect 1.4% positive and 98.6% negative.
For any particular positive result, it has a 65% chance of being true!
For any particular negative result, it has a 99% chance of being true.
Same numbers... but with the 67.6% sensitivity
At 10% prevalence:
For any particular test result you'd expect 7.2% positive and 92.8% negative.
For any particular positive result, it has a 93.6% chance of being true.
For any particular negative result, it has a 96.5% chance of being true.
At 1% prevalence:
For any particular test result you'd expect 1.1% positive and 98.9% negative.
For any particular positive result, it has a 57.7% chance of being true.
For any particular negative result, it has a 99.6% chance of being true.
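The predictive values quoted above follow directly from Bayes' rule. A minimal sketch in Python (the function and variable names are mine, not from the study):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute the overall positive-test rate, PPV, and NPV for a
    diagnostic test at a given true disease prevalence."""
    # P(test positive) = true positives + false positives
    p_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    # PPV: P(disease | positive result)
    ppv = prevalence * sensitivity / p_pos
    # NPV: P(no disease | negative result)
    npv = (1 - prevalence) * specificity / (1 - p_pos)
    return p_pos, ppv, npv

# Manufacturer's 91.8% sensitivity, 99.5% specificity, 10% prevalence
print(predictive_values(0.918, 0.995, 0.10))
```

At those inputs this returns roughly (0.096, 0.953, 0.991), matching the 9.6% positive rate and 95% PPV quoted above; dropping prevalence to 1% drops the PPV to about 65%.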
They got 50 positive cases out of 3,300 tests, which is a crude prevalence of 1.5%.
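A standard way to adjust that crude 1.5% for imperfect sensitivity and specificity is the Rogan-Gladen correction. This is my addition for illustration, not a calculation the post itself performs:

```python
def rogan_gladen(apparent_prevalence, sensitivity, specificity):
    """Estimate true prevalence from a raw positive-test rate,
    correcting for imperfect test sensitivity and specificity."""
    return (apparent_prevalence + specificity - 1) / (sensitivity + specificity - 1)

# 50 positives out of 3,300 tests, manufacturer's numbers (91.8% / 99.5%)
print(rogan_gladen(50 / 3300, 0.918, 0.995))  # roughly 0.011, i.e. ~1.1%
```

With the locally measured 67.6% sensitivity, the same crude rate corrects to about 1.5% instead.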
oragator said:
I am saying that the percent of asymptomatic cases from real-world examples like this can be used to inform estimates for much larger groups, with the knowledge that it's one sample set from a relatively small population. But there was another study in Italy that came up with similar numbers. And neither study accounted for those who developed symptoms later.
The Roosevelt crew will have a far lower death rate than the general population: not only are they young, a group that already has a low death rate, they are far less likely to be obese or have other underlying conditions, or they wouldn't have made it into the service or through the first few months.
But a nearly wholly young, relatively healthy group like this is probably the best case as far as asymptomatic rates go, and it's still nowhere near what was in the Stanford study. Even if we are overestimating the death rate by half, it's still a very scary number.
Honestly as a scientist being a skeptic right now is where the reward is. It's a contrary voice in a sea of panic so it will get lots of attention, and we will likely never get to an infection rate high enough to definitively prove them wrong. More power to them, it's not going to change much either way anyway.
k2aggie07 said:
So a couple of things. One is that this study was authored by skeptics...
Quote:
I felt like I was reading a repudiation of this study, but then the takeaway at the end is that the results might still show a prevalence 30-48 times higher than thought? That would still be massive news.
oragator said:
I'm not talking about the death rate on the Roosevelt, only the asymptomatic rate.
The post that I was responding to was saying that this Stanford study meant the real death rate was potentially very small compared to current estimates, because 50 times more people had the virus than we think; I was arguing otherwise. If we are eventually capturing roughly half of the actual cases in our daily numbers, as the ship data suggests (under the assumption that most symptomatic people will likely get tested eventually), our expected death rate should only be off by at most half. And half of two or three percent is still a lot of people. It also means we are light years from herd immunity, or any of the other reasons to let our guard down.
That's all.
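The adjustment described above (if roughly half of true infections are captured as confirmed cases, the observed death rate overstates the true rate by about a factor of two) can be sketched as follows; the numbers are illustrative, not from any dataset:

```python
def true_fatality_rate(observed_cfr, capture_fraction):
    """If only a fraction of true infections show up as confirmed cases,
    the observed case fatality rate overstates the infection fatality
    rate by a factor of 1 / capture_fraction."""
    return observed_cfr * capture_fraction

# Illustrative: a 3% observed death rate with half of infections captured
# still implies a ~1.5% fatality rate, far above a flu-like ~0.1%.
print(true_fatality_rate(0.03, 0.5))
```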
Where did I say anything about my bias? I showed theirs, but I never mentioned anything about what I think the prevalence is.
Quote:
Who was involved in this study?
They have earned the right to question the current narrative and seek out answers to a very complex problem.
But their doubts about the panic surrounding this pandemic don't fit YOUR bias, so of course you don't believe them.
Tell me again what your credentials are, and why your data reduction and conclusions are more worthwhile than theirs?
PJYoung said:
That's the former head of the FDA.
ETFan said:
marloag said:
Making the mortality rate somewhere close to that of the flu. Interesting
We can look to NY to see this clearly isn't like the flu.
Quote:
I say skeptic because three of them wrote editorials saying essentially that the disease was being overblown, and offered some crude estimates for severity that could imply the severity was much lower than the 3.4% number. Bendavid and Bhattacharya wrote one in the WSJ; Ioannidis wrote one for Stat. It doesn't mean they're wrong, it means you know what they were expecting to find (high prevalence -> low severity) and should consider that when reading the study. Don't misconstrue what I'm saying. I don't think they did anything wrong, but it is a truism that people generally find the result they're looking for. That's why we do meta-analysis and replication.
PJYoung said:
Um...
"Dr. John Brownstein, an epidemiologist at Boston Children's Hospital and an ABC News contributor, cautioned that the results for the California county are not necessarily representative of the U.S. population and noted the use of online ads to find participants could skew the candidate pool."
k2aggie07 said:
No, they wrote them generally against the WHO estimate of 3.4% IFR. The Imperial study used an IFR of 0.66%.
Oxford released a paper showing a model that supposed a very high infectious rate, and a correspondingly lower IFR.
My point is these guys have expressed a strong position of doubt, up to and including writing op-eds in major papers. That's significant to me. These are not disinterested observers at this point.
Me? I got nothing riding on this. The latest study I've seen estimated 2.7% current infections in the US. I think that makes sense, and I don't have any problem with this study. It's good, we need more studies like this.