
Cross-posted from my own blog, with a late p.s. from this morning’s paper

When John Grohol read my post the other day about evidence-based medicine, he steered me to a paper worth reading: Helping Doctors and Patients Make Sense of Health Statistics. (Update Dec 15 2010: that link is broken; this link works.)

This is relevant to the e-patient movement because as you and I become more responsible for our own healthcare, we need to be clearer about what we’re reading. Plus, it appears we could be more vigilant about what our own professional policymakers – and even our MDs – are thinking.

The paper is 44 pages, but even the first few will open your eyes to how statistically illiterate most of us (and them) are. Consider this question, which was given to 160 gynecologists:

Assume the following information about the women in a region:

  • The probability that a woman has breast cancer is 1%
  • If a woman has breast cancer, the probability that she tests positive is 90%
  • If a woman does not have breast cancer, the probability that she nevertheless tests positive is 9% (false-positive rate)

A woman tests positive. She wants to know whether that means that she has breast cancer for sure, or what the chances are. What is the best answer?

  1. The probability that she has breast cancer is about 81%.
  2. Out of 10 women with a positive mammogram, about 9 have breast cancer.
  3. Out of 10 women with a positive mammogram, about 1 has breast cancer.
  4. The probability that she has breast cancer is about 1%.

Only 21% of them got the right answer (#3: about 1 chance in 10). 60% guessed far too high, and the remaining 19% chose #4, which is ten times too low.
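(For the curious, here’s the arithmetic behind the correct answer. This is just a sketch in Python, with variable names of my own choosing; it simply applies Bayes’ rule to the three numbers given in the question.)

```python
# Probabilities exactly as stated in the question
prevalence = 0.01            # P(cancer)
sensitivity = 0.90           # P(test positive | cancer)
false_positive_rate = 0.09   # P(test positive | no cancer)

# Bayes' rule: P(cancer | positive test)
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
p_cancer_given_positive = (prevalence * sensitivity) / p_positive

print(f"P(cancer | positive test) = {p_cancer_given_positive:.1%}")
# Prints about 9.2% -- roughly 1 woman in 10, i.e. answer #3
```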

The paper presents numerous other examples of statistical illiteracy (an example of “innumeracy”), misunderstandings of data that lead to serious unintended policy consequences. My personal favorite is the opening item about Rudy Giuliani’s assertion that he’s lucky to have gotten prostate cancer here instead of under the UK’s “socialized” medical system. It’s not because I don’t like Giuliani – it’s that his own misunderstanding of the data he was quoting led him to advocate something that had nothing to do with his actual odds. He himself would have been harmed if he’d been guided by his own best advice. And he’s not alone in that.

The paper proposes uncomplicated ways to improve our comprehension. First among them is to stop talking in percentages and talk instead in raw numbers. Phrased that way, the same three facts that were given to the gynecologists are much clearer:

  • Ten out of every 1,000 women have breast cancer
  • Of these ten women with breast cancer, 9 test positive
  • Of the 990 without breast cancer, 89 nevertheless test positive.

With this view, 87% got it right. (Of the 98 women who tested positive, only 9 actually have cancer: about 1 in 10.)
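(Here is the same arithmetic in raw-number form, again just a sketch using the counts listed above. Both framings give the same answer; this one is simply far easier to see at a glance.)

```python
# Natural frequencies: imagine 1,000 women.
# 10 have breast cancer; 9 of them test positive.
# 990 do not have breast cancer; 89 of them nevertheless test positive.
true_positives = 9
false_positives = 89

all_positives = true_positives + false_positives    # 98 women test positive
print(true_positives / all_positives)                # ~0.092, i.e. about 1 in 10
```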

Another example echoed what The End of Medicine said about Lipitor: in the control group, about 1.5% had a coronary event; with Lipitor, about 1% still did. A 1995 alert in the UK warned that certain oral contraceptives doubled the risk of blood clots in the lung or leg. Understandably, many women stopped taking the pill; within three years, 13,000 more abortions were performed, reversing five years of decline, and there was a matching increase in live births.

What was the risk that led to this? In raw numbers, one woman in 7,000 has such a blood clot anyway; with this pill, that rose to two in 7,000: one additional clot.

The irony in this case is that both abortion and childbirth carry more risk of clots than the pill itself. In other words, one benefit of the pill is that it avoids the risk of clots associated with the end of any pregnancy.

So although the number presented (“double the risk”) was perfectly accurate as a relative figure, the absolute clinical impact was tiny: one additional clot per 7,000 women.
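(If you like to see the arithmetic, here’s a small sketch contrasting the two framings of the same pill numbers; the 1-in-7,000 and 2-in-7,000 figures are the ones quoted above.)

```python
# Blood-clot risk per woman, from the raw numbers above
baseline_risk = 1 / 7000      # risk of a clot without the pill
pill_risk = 2 / 7000          # risk of a clot with the pill in question

relative_risk = pill_risk / baseline_risk                    # 2.0 -- "double the risk"
extra_clots_per_7000 = (pill_risk - baseline_risk) * 7000    # 1.0 -- one extra clot

print(f"Relative risk: {relative_risk:.0f}x the baseline")
print(f"Absolute increase: {extra_clots_per_7000:.0f} extra clot per 7,000 women")
```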

This is just a taste of the first few pages. The paper gets dry in places, but the opening is compelling and informative, and at no point does it require that you be a mathematician. The explanation of Giuliani’s error is particularly good.


p.s. A perfect example came in just before the scheduled release of this post: Today’s NY Times discusses a “large new study” of Crestor, a statin, involving 17,800 patients. It reports apparently dramatic benefits – 54% fewer heart attacks, etc. And it correctly, imo, asks “Who should take statins?”

But these “relative risk reduction” numbers (percent reduction) are exactly what Making Sense warns against: what are the raw numbers?

This is not to say we shouldn’t use statins. The whole point is that the Times piece doesn’t give us enough information to know.
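(To see why the raw numbers matter so much, here’s a purely hypothetical sketch; the baseline rates below are invented for illustration and are not the trial’s actual results. The same “54% fewer” can describe very different absolute benefits.)

```python
# Hypothetical illustration only -- NOT the actual Crestor trial data.
# The same 54% relative risk reduction means very different things
# depending on how common heart attacks are to begin with.
relative_reduction = 0.54

for baseline_rate in (0.20, 0.01):      # invented baseline event rates
    treated_rate = baseline_rate * (1 - relative_reduction)
    fewer_per_1000 = (baseline_rate - treated_rate) * 1000
    print(f"Baseline {baseline_rate:.0%}: about {fewer_per_1000:.0f} fewer events per 1,000 treated")
# Baseline 20%: about 108 fewer events per 1,000 treated
# Baseline 1%: about 5 fewer events per 1,000 treated
```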

And Making Sense argues that without such information, the whole concept of informed consent is a fiction. Think about that one for a bit.

Thanks to the good Doctor John for the link.

 
