NY Times: “Do clinical trials work?”


By Ruth Gwily for The Times

Just a quick note on something I’m happy to say we’ve been hollering here for years: a lot of what passes for “evidence” in peer-reviewed medical journals is scientifically weak, and has never been verified by an independent lab.

That means that to be scientific, e-patients and their physicians must interpret any published result with caution: if the study wasn’t large, it might get published even if its result is no better than a roll of the dice.

Yes, roll of the dice: that’s the illustration on a piece in Sunday’s NY Times by veteran journalist* Clifton Leaf: Do Clinical Trials Work?
____________

Sample quote:

That we could be this uncertain about any medicine with $6 billion in annual global sales — and after 16 years of human trials involving tens of thousands of patients — is remarkable in itself. And yet this is the norm, not the exception.

Got that? The norm.

If it sounds familiar, you might be a regular reader here. In the opening issue of our Journal of Participatory Medicine, Richard Smith wrote:

After 30 years of practicing peer review and 15 years of studying it experimentally, I’m unconvinced of its value. … evidence on the upside of peer review is sparse, while evidence on the downside is abundant. We struggle to find convincing evidence of its benefit, but we know that it is slow, expensive, largely a lottery, poor at detecting error, ineffective at diagnosing fraud, biased, and prone to abuse. …

And my personal favorite:

[M]ost of what appears in peer-reviewed journals is scientifically weak.

Yo: That was four years ago. “Scientifically weak.” “A lottery.” And now, in Sunday’s Times piece:

The researcher said that when he and his team designed the Phase 3 trial, he thought the drug would probably fail. But if they could get an approval for a drug for Alzheimer’s disease, it would be “a huge success.”

“What he was saying,” marvels Dr. Berry, “was, ‘We’re playing the lottery.’ ” [Emphasis added]

Pay attention here. Who was that un-medical hippy Richard Smith, writing in the inaugural issue of our journal? Smith was editor of the British Medical Journal [BMJ] for 25 years.

And then there’s this, from a post here the same month:

It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of The New England Journal of Medicine. [NEJM]

That one’s from Marcia Angell, MD. (See also Smith’s post last week on the BMJ blog, on why the NEJM refuses to publish letters that destroy articles it has published.)

The e-patient take-aways:

Participatory medicine is about patients shifting from being mere passengers to being responsible drivers. To be responsible, you need reliable information, and you need to check it. So:

  • It’s valid to ask questions about the evidence. (In fact, as any high school student learns, it’s unscientific not to examine the methods and data.) 
  • Read this blog and journal. Here we are four years later, and both the NY Times and the giant ASCO conference (cited in that article) are saying what we said here four years ago.

Note that this is not just a patient issue – how are doctors supposed to perform to their potential if the evidence they’re served might be crap?
___________

* From his author page: “Clifton Leaf is a guest editor for The New York Times op-ed page and Sunday Review. Previously, he was executive editor at both The Wall Street Journal’s SmartMoney magazine and Fortune. A winner of the Gerald Loeb Award for Distinguished Business and Financial Journalism and a two-time finalist for the National Magazine Award, Cliff has received several leadership honors for his efforts in the cancer fight.”
