Retracted Study re: HCQ

3,856 Views | 21 Replies | Last: 5 yr ago by DTP02
End Of Message

Resistance to tyranny is obedience to God.
Learned2Code
So the peer review system worked. Good. Best not to jump to conclusions until full publication.
Cancelled
Hahaha
samurai_science
Well, we have research from as early as 2003 suggesting it works against SARS-like coronaviruses.

All of these doctors in China, South Korea, Japan, and Singapore started using it early on for a reason.

The doctors didn't just magically decide to try it out of the blue.
J. Walter Weatherman
Unfortunately this doesn't really work in the desperate click-chasing media world we live in.
One Eyed Reveille
This is the article that caused the WHO and others to STOP testing HCQ because it said it increased the death rate, right?
FlyRod
This was exactly a peer review failure (speaking as someone who knows a tiresome amount about this process).

Please don't interpret the retraction as "hey, the drug is OK after all!" It very well might be. But in this case, this article was rushed to publication without thorough vetting.

I am genuinely concerned this fiasco will damage the fight against COVID; people will (rightly) mistrust the scientific peer review process, and it will green light a lot of snake oil (and I'm not referring to the drug in question).
Silky Johnston
Why is The Lancet still considered a reputable publication after publishing Andrew Wakefield's lies about MMR vaccines and autism?
DadHammer
Because they put out studies certain people want put out. The truth doesn't matter.
DTP02
If you look into the background of the company that supplied the critical data, it's so obviously fraudulent that it never should have gotten anywhere close to publication.

This study was pure make-believe.

That a reputable medical journal published it, and so many policy-makers relied on it, speaks volumes.

If this doesn't cause people in the scientific community to take a hard look at themselves, I don't know what will.
End Of Message
DTP02 said:

[...] If this doesn't cause people in the scientific community to take a hard look at themselves, I don't know what will.
amercer
This is really unfortunate considering the interest in these studies. I was complaining on another thread about the rise of preprint servers and media (or Twitter) immediately reporting on unreviewed work. This is a more old-fashioned failure of the system, but another reminder that good science takes time.

Decent studies are trickling out, and full clinical trials will read out at some point. It just moves on a totally different timescale than a pandemic response.
(Removed:110240)
The real question now that we have a clear example is how many other flawed papers have slipped through the cracks in the rush to publish everything related to COVID-19?
Windy City Ag
Looks like Oxford researchers came out just now and poured more ice water on HCQ's efficacy.

http://www.ox.ac.uk/news/2020-06-05-no-clinical-benefit-use-hydroxychloroquine-hospitalised-patients-covid-19

To the original point, though, I do think academic journals in most disciplines are under serious fire for lazy vetting. It is a shame because so many of these periodicals have real influence on policy-making, as evidenced by state governments and international agencies relying on this work.

The extremely tough thing about the current debate is that the counterfactuals involve life and death. If the research is wrong, you might have blocked a life-saving treatment. If the research is correct, then you have pushed magic beans and blocked other lines of treatment, or, in some historical cases, backed harmful treatments.

I know in my profession (quantitative finance and economics), Federal Reserve researchers put out a pretty sad research piece showing that even when they were supplied with the underlying data and code for research pieces, they couldn't replicate the authors' findings.

Quote:

"We attempt to replicate 67 papers published in 13 well-regarded economics journals using author-provided replication files that include both data and code. Some journals in our sample require data and code replication files, and other journals do not require such files.

Aside from 6 papers that use confidential data, we obtain data and code replication files for 29 of 35 papers (83%) that are required to provide such files as a condition of publication, compared to 11 of 26 papers (42%) that are not required to provide data and code replication files.

We successfully replicate the key qualitative result of 22 of 67 papers (33%) without contacting the authors. Excluding the 6 papers that use confidential data and the 2 papers that use software we do not possess, we replicate 29 of 59 papers (49%) with assistance from the authors. Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable. We conclude with recommendations on improving replication of economics research."
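The rates quoted above follow directly from the counts in the abstract; here is a minimal Python sanity check (every number is taken from the quote, nothing else is assumed):

```python
# Counts quoted in the Fed replication study's abstract.
total_papers = 67
files_when_required, required_total = 29, 35   # journals requiring data/code files
files_when_optional, optional_total = 11, 26   # journals not requiring them
replicated_unaided = 22                        # replicated without contacting authors
replicated_aided, aided_pool = 29, 59          # excludes 6 confidential-data + 2 software papers

print(f"files obtained when required: {files_when_required / required_total:.0%}")  # 83%
print(f"files obtained when optional: {files_when_optional / optional_total:.0%}")  # 42%
print(f"replicated without authors:   {replicated_unaided / total_papers:.0%}")     # 33%
print(f"replicated with author help:  {replicated_aided / aided_pool:.0%}")         # 49%
```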

That doesn't stop a lot of the work from influencing central-bank decision makers around the world. The Reinhart-Rogoff error is probably one of the most famous.

https://www.newyorker.com/news/john-cassidy/the-reinhart-and-rogoff-controversy-a-summing-up
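For anyone unfamiliar with the Reinhart-Rogoff episode: part of the problem was a spreadsheet averaging formula whose range silently omitted several countries' rows. A toy sketch (hypothetical growth numbers, not the actual dataset) shows how a truncated range alone can shift the answer:

```python
# Hypothetical per-country growth rates -- illustration only,
# not the Reinhart-Rogoff data.
growth_by_country = [3.1, 2.4, 2.2, 1.8, 0.9, -0.1, 2.6]

# Intended calculation: average over all seven countries.
full_mean = sum(growth_by_country) / len(growth_by_country)

# Spreadsheet-style mistake: the AVERAGE() range stops two rows short.
truncated = growth_by_country[:5]
truncated_mean = sum(truncated) / len(truncated)

print(f"all countries:   {full_mean:.2f}")       # 1.84
print(f"truncated range: {truncated_mean:.2f}")  # 2.08
```

In the actual episode, the omitted rows combined with other data and weighting choices to change the published growth figure for high-debt countries enough to alter the policy takeaway.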
DTP02
Windy City Ag said:

Looks like Oxford researchers came out just now and poured more ice water on the efficacy. [...]

Regarding the HCQ study, I think the jury is pretty much already in on the efficacy of HCQ for people who are already severely affected. Even the anecdotal evidence for that type of usage is not positive.

The hopes for HCQ have, from the beginning, been more for effectiveness as a prophylactic or early-stage treatment. The Oxford study, since it only concerned hospitalized patients, sheds no new light on that usage.

The failure of replicability in published experiments is something that has been observed in other fields as well. There was a study several years ago that found a similar lack of replicability.

IMO it's largely due to systemic academic failures and to some degree politicization.
Windy City Ag
Quote:

IMO it's largely due to systemic academic failures and to some degree politicization.

Yeah, the PhD programs are largely to blame IMO. Tenure drives a publish-or-perish mentality, and professors become desperate to get something/anything out there.

In my field, it took decades to build durable datasets. The University of Chicago's CRSP data has undermined some of the most sacred cows, like the small-cap effect and, most recently, the value effect. It was Chicago PhDs who came up with all that stuff, so kudos to them (I suppose) for invalidating their own findings. Still, it is amazing how many strongly held academic views rest on poor-quality data and pretty wobbly statistical significance.

plain_o_llama
Unfortunately, pre-print servers end up being the Twitter of science: immediate, knee-jerk, and prone to politicization.

Even those that know better can't resist. Remember this blast from the recent past:

"A Prophet of Scientific Rigor - and a Covid Contrarian"
John Ioannidis laid bare the foibles of medical science. Now medical science is returning the favor.

https://www.wired.com/story/prophet-of-scientific-rigor-and-a-covid-contrarian/

Ioannidis has been a fixture in medical-school curricula for years, achieving something akin to hero status. He's one of the most-cited scientists of any type in the world, and may be peerless on this metric among physicians. Amazingly, he's earned all this acclaim by dedicating his career to telling the fields of biomedicine (and others, too) how shoddy they are, and how little trust one should have in their published research.

But now the scientist celebrated for showing colleagues how their studies are screwed up has a new claim to fame. Its very different vibe is reflected in the faces of the medical students I'm addressing. Almost literally overnight Ioannidis has himself become a case study in how to screw up a medical study. And not just any study: This one concludes that Covid-19 isn't all that dangerous; that the current lockdowns to prevent its spread are a bigger threat to public health than the actual disease. In other words, Ioannidis' views on the pandemic sound closer to those of the governor of Georgia than to Anthony Fauci's.


Ranger222
Windy City Ag said:

[...] Tenure drives a publish or perish mentality and professors become desperate to get something/anything out there. [...]

The entire academic system is broken, and many have called for reform for a long time, but old habits die hard. There are a lot of positive developments, such as open, raw data being available to everyone, which is allowing more transparency and more serious checks on data manipulation as technology develops and improves.

But it's still a manpower-and-hours issue when we are talking about peer review. Part of your "duty" is to review others' work when it is submitted to journals. I put together a paper and submit it to a journal; an editor gets selected, and they are tasked with finding (hopefully non-biased) reviewers who are experts in the field. The reviewers read it, make comments, and recommend accepting or rejecting the paper. It is almost always rejected, with a chance at resubmission if we can address the reviewer comments. Some reviewers are good, others are bad.

Here's the thing: you don't get paid to review these manuscripts. It's purely a courtesy. If you're not paid to do it, how much effort do you think gets put in? I reviewed two manuscripts last week, and each took around 8 hours of my time (~16 hours total) that I didn't get paid for, and honestly I needed more time with them before I sent in the reviews. I am only given so long before I have to turn in the reviewer comments. It's not like I can take my sweet time, as the authors want their comments back so they can start working on their revision (the process usually takes about 3 weeks from first submission to receiving comments, and the top journals want a quick turnaround so they can advertise that). Those 16 hours were my "free time" as well, not part of my regular job.

See the issues now? While peer review is designed to catch a lot of flaws, the system is nowhere near perfect, and there will be even more pressure to get a COVID/HCQ manuscript reviewed quickly. If someone says "we got this dataset," I'm probably not checking the source of the dataset, especially if it's from multiple places. I'm focused more on how they analyzed the data, how they did the statistics, whether they reach valid conclusions based on what the data told them, and whether they convey that effectively in their writing or are exaggerating to make a headline. You might say, "how can you not check the data source?" Again, I'm doing this on my own time, for free, with only a couple of days to do it. I'm probably relying on someone else to catch it.

The political thing in academia gets played up too much. There have now been retractions on both sides of the HCQ question. I'm not saying it never happens, but I have personally never seen it be an issue or a factor. There is no high order to only think one way or hold one viewpoint. Nobody asked what my political thoughts or leanings were when I accepted a faculty position a few months ago. There is no "liberal agenda" to "brainwash" researchers into making data show only one desired outcome at American universities. It's a tired narrative.
DTP02
plain_o_llama said:

[...] "A Prophet of Scientific Rigor - and a Covid Contrarian" [...]

I'm not sure what your point is in linking that excerpt from May 1, which sought to pillory a guy who it looks like had it closer to right than the vast majority did at the time.
Dr. Not Yet Dr. Ag
DTP02 said:

[...] The hopes from the beginning for HCQ have been more for effectiveness as a prophylactic or early stage treatment. The Oxford study, since it only concerned hospitalized patients, sheds no new light on that usage. [...]

NEJM published their HCQ prophylaxis study this past week. It doesn't work.
BiochemAg97
FlyRod said:

This was exactly a peer review failure (speaking as someone who knows a tiresome amount about this process). [...]

It isn't so much a failure of the peer review process as it is a consequence of failing to peer review at all. So much of the COVID research has been rushed to publication in preprint form before peer review.
plain_o_llama
You caught the irony of the situation. Ioannidis may end up being broadly correct in his view that the dangers of Covid-19 were being exaggerated and politicized. Yet, in the face of politics, he pushed forward and seemingly politicized the results of a serology-based study; specifically, a report with a lot of the kinds of problems he made his reputation criticizing.

Perhaps there is additional irony in the tone of this article and the suggestion that Ioannidis will now be reviled for his political position rather than for overstating some study results. If you are a junior researcher, what lesson is to be learned? That if your science touches the political battlefield, it will be judged through both frames?
DTP02
Dr. Not Yet Dr. Ag said:

[...] NEJM published their HCQ prophylaxis study this past week. It doesn't work.

Thanks, I had missed it until now. Definitely not a promising study for remaining hopes for HCQ efficacy.