A Troubled Trifecta: Peer Review, Academia & Tenure

We welcome Peter Frishauf as an author on our blog. Peter is on the Editorial Board [brief bio] of our Society’s Journal of Participatory Medicine, and as described below, has already authored some important material on this subject. His first post here is triggered by an article in Tuesday’s New York Times that generated much discussion on a vital subject: how we reality-check what we think we know, as a basis for science, and especially for medical advice. – e-Patient Dave

It’s a recitation of the obvious: without reliable, high-quality peer review, medical information and patient care suffer.

Sadly, broken peer review is also entangled with academia’s tenure system. We need to fix that.

I hate to condemn with a broad brush: many academics care deeply about better patient care and believe – genuinely – that tenure is a good thing. As Wikipedia describes it today, tenure “protects teachers and researchers when they dissent from prevailing opinion, openly disagree with authorities of any sort, or spend time on unfashionable topics.” But sadly, the vast majority of academics acknowledge that an even more powerful benefit of tenure is pretty damn good job security. Tenure has much more to do with money (lifetime job security) than academic freedom.*

Tenure and traditional peer review are joined in a pernicious union that should be smashed if universities are to be trusted to promote the public interest through good, published science and medicine. In the “publish or perish” environment of academic medicine, candidates for tenure are generally ranked not on how much information they create (by research or invention), or how well they teach (which counts for just about nothing), but on the prestige of the journal where they publish. It’s a system that might have made sense when a journal’s reputation was regarded as a proxy for an author’s scientific rigor and integrity. No more.

With the Internet, information is distributed efficiently and effortlessly. Reputations are created – and destroyed – quickly. Crowdsourcing experts review articles to improve their quality. Articles themselves aren’t published and forgotten, but continually curated over time. The old publishing and peer review model is obsolete in this environment.

In an excellent article this week in The New York Times, journalist Patricia Cohen describes a number of experiments to improve peer review:

  • Katherine Rowe, a Renaissance specialist and media historian at Bryn Mawr College, worked with the prestigious 60-year-old Shakespeare Quarterly to crowdsource the review of articles through a panel of invited experts and public commenters. The reviewers “were invited to post their signed comments on the Web site MediaCommons, a scholarly digital network.” Others could add their thoughts as well, after registering with their own names. “In the end 41 people made more than 350 comments, many of which elicited responses from the authors. The revised essays were then reviewed by the quarterly’s editors, who made the final decision to include them in the printed journal, due out Sept. 17.”
  • “Today a small vanguard of digitally adept scholars is rethinking how knowledge is understood and judged by inviting online readers to comment on books in progress, compiling journals from blog posts and sometimes successfully petitioning their universities to grant promotions and tenure on the basis of non-peer-reviewed projects.”
  • “Clubby exclusiveness, sloppy editing and fraud have all marred peer review on occasion. Anonymity can help prevent personal bias, but it can also make reviewers less accountable; exclusiveness can help ensure quality control but can also narrow the range of feedback and participants. Open review more closely resembles Wikipedia behind the scenes, where anyone with an interest can post a comment. This open-door policy has made Wikipedia, on balance, a crucial reference resource.”

Journalist Cohen also nails the tragic and under-reported association between traditional peer review and tenure:

  • “The most daunting obstacle to opening up the process is that peer-review publishing is the path to a job and tenure, and no would-be professor wants to be the academic canary in the coal mine. The first question that Alan Galey, a junior faculty member at the University of Toronto, asked when deciding to participate in The Shakespeare Quarterly’s experiment was whether his essay would ultimately count toward tenure. ‘I went straight to the dean with it,’ Mr. Galey said. (It would.)”

Well, good for the University of Toronto: Mr. Galey can get “tenure credit” and participate in an experiment to create high quality information. What a concept for a university!

Medicine isn’t a total laggard in innovating in this space. PLOS, several journals within the BioMed Central group, and others are experimenting with open, pre- and post-publication peer review. A growing group of medical publications – but still a tiny percentage – embrace open access, which makes articles freely available. U.S. federal law mandating open access for research published with public grant money is accelerating the trend.

Still, when you consider what’s at stake – healthcare, and at times a life-saving treatment – we should hang our heads in shame at how slow medical publishers have been to improve peer review and access. And shouldn’t academia be leading the charge for better information rather than obstructing it through a system as self-serving and public-interest-damaging as tenure?

In more naïve times, most of us believed that the system of peer review used by trusted, traditional publications like The New England Journal of Medicine was our best guarantee of reliable, current medical information. Now we know better: numerous experts have provided extensive evidence that traditional peer review is unreliable. And using tenure to prop up the bad system is a black eye on our universities.

I have proposed [3] that online reputation systems could be created to help assess the reliability of crowdsourced information, medical evidence, and pre- and post-publication peer review. It’s just one idea, my horse in the race. Regardless of what we wind up with, here’s hoping that everyone who wants to improve health and patient care will innovate to fix our broken peer review system. And while we’re at it, let’s fix tenure, too!

* When you think about it, academic freedom and tenure shouldn’t even be linked: there are other forces (transparency, public exposure, good journalism by people who write about universities, even political pressure) that protect academic freedom more powerfully than tenure does.

Suggested Readings, Podcasts, and Websites

  1. Cohen P. Scholars test web alternative to peer review. The New York Times. Published August 23, 2010. Retrieved 20:38, August 25, 2010.
  2. Smith RW. In search of an optimal peer review system. J Participat Med. 2009(Oct);1(1):e13. Retrieved 20:40, August 25, 2010.
  3. Frishauf P. Reputation systems: a new vision for publishing and peer review. J Participat Med. 2009(Oct);1(1):e13a. Retrieved 20:42, August 25, 2010.
  4. Frishauf P, Smith RW, Gruman J, Green L. Participatory evidence: opportunities and threats. Podcast for J Participat Med. 2010(Aug). Retrieved 20:45, August 25, 2010.
  5. Frishauf P, Smith RW, Wager L, Jadad A, Adler T. Peer review and reputation systems: a discussion. Podcast for J Participat Med. 2010(Aug). Retrieved 20:51, August 25, 2010.
  6. Rothwell PM, Martyn CN. Reproducibility of peer review in clinical neuroscience. Brain. 2000(Sept);123(9).
  7. MIT Center for Collective Intelligence. http://cci.mit.edu/
  8. Tenure. (2010, August 17). In Wikipedia, The Free Encyclopedia. Retrieved 19:28, August 25, 2010, from http://en.wikipedia.org/w/index.php?title=Tenure&oldid=379372335
  9. Adler TB, de Alfaro L. A content-driven reputation system for the Wikipedia [PDF]. ACM 978-1-59593-654-7/07/0005. Accessed October 17, 2009. The group has released a WikiTrust extension for the Firefox web browser based on its research, now available in beta at https://addons.mozilla.org/en-US/firefox/addon/11087 (accessed October 2, 2009). A review of the extension may be found in Wired magazine at http://www.wired.com/wiredscience/2009/08/wikitrust/ (accessed October 2, 2009).
  10. Priedhorsky R, Chen J, Lam SK, et al. Creating, destroying, and restoring value in Wikipedia. ACM 978-1-59593-845-9/07/0011. [Reference to come.] Accessed August 12, 2009.
  11. Frishauf P. The end of peer review and traditional publishing as we know it. Medscape J Med. 2008;10(11):267. Retrieved 21:35, August 25, 2010.

Comments

10 Responses to “A Troubled Trifecta: Peer Review, Academia & Tenure”

  1. Let’s not forget, though, that not all peer-reviewed scientific studies are connected to tenure. Many are the products of the pharmaceutical industry. Following the case of Scott Reuben MD, whose hand was so vehemently slapped for a career of peer-review fraud, does anyone still trust the peer-reviews of pharmaceutical studies? Gilles Frydman said Reuben put “the last two handful of nails into the coffin of the scientific peer-reviewed process.”

    So, on the one hand we have young scientists trying to fulfill their tenure requirements. On the other hand, we have all that money out there waiting to reward pro-pharma testers and reviewers. Crowd-sourcing offers a possible answer to the tenure-requirements problem, but what about the other? What about the guys who cheat on their clinical trials (or like Reuben, just do the whole-cloth thang)? Can anyone doubt that there are kickbacks going to some of the reviewers to ensure that those “results” get published?

    I hate to sound like a conspiracy theorist, but I’m beginning to have genuine doubts about so-called evidence-based medicine. There’s no there there.

    • BillDog, with respect to bias from pharma-supported research: both good and bad science comes from academia and industry, and I know of no evidence that one is worse than the other. But a good peer review system would catch bad science whatever its source. For me, a more relevant question about how studies are conducted is whether the inclusion and exclusion criteria in drug studies (mandated by the FDA, not industry) take into account how drugs work in the real world, where people often have constellations of diseases (e.g., hypertension, diabetes, and respiratory disease). For a discussion of this, listen to the podcast at
      http://www.jopm.org/multimedia/podcasts/2010/08/09/participatory-evidence-opportunities-and-threats/.

      And thanks for those comments from bev M.D. and Susannah Fox. I agree!

  2. bev M.D. says:

    Excellent, thoughtful post. I think the operative quote in the NYT article is:

    “What we’re experiencing now is the most important transformation in our reading and writing tools since the invention of movable type,” said Katherine Rowe.

    As applied to medicine, peer review has heretofore been restricted to a ‘good old boy’ network of the elites in academic medicine – who, predictably, have their own agendas and biases. Dissenters were confined to letters to the editor after the fact. Now, an entire article could be published, debated, and even perhaps revised in real time, for all to read. Could this reduce the lethal lag time which has plagued medical advances for so long?

    Dave, I think the biased pharma-funded (and -designed, and even -written) research would be more exposed to sunshine with this new development – so I see this as cause for optimism, not pessimism.

  3. Susannah Fox says:

    Peter, thank you for bringing this issue “home” for the health geek crowd.

    A related story this week is Amy Dockser Marcus’s coverage in the WSJ of a recent publication regarding chronic fatigue syndrome. Here’s her blog post about it:
    http://blogs.wsj.com/health/2010/08/24/pnas-paper-on-virus-chronic-fatigue-syndrome-link-has-its-own-story/

    She writes about how the peer review process slowed down publication of the findings and how members of the National Academy of Sciences are able to choose their own reviewers.

    What stuck with me, however, is that some CFS patients knew about this blockbuster finding months ago, started on the new treatment, and found it to be beneficial. They didn’t wait for the journal article to be published.

    How did they find out? How did they spread the news? If patients found out first, how did they present it to their doctors? If doctors found out first, how did they present it to their patients?

    How do other CFS patients feel now, if they are just hearing about this potentially life-changing news? How long will it take for this treatment to spread to the wider population of CFS patients?

    Alternatively, what are the downsides of rapid publication and dissemination?

    Threaded throughout it all is the internet’s role, patient networks’ role, health professional networks’ role. This is a deep and significant effect of the internet and social networks on health and health care. It is not just CFS, but many condition groups who are engaging in this work to uncover answers faster and disseminate them more widely.

  4. bev M.D. says:

    Oops, I misread Dennis’ name as Dave (as in epatient dave.) Doesn’t change my response above, but sorry for the error.

  5. During my decade in academia, those of us on the tenure track used to quip: “Publish AND perish.”

  6. TILL STANDARDS, SUSTAINABILITY AND SCALABILITY ARE TESTED:
    PEER COMMENTARY IS SUPPLEMENT, NOT SUBSTITUTE, FOR PEER REVIEW

    See: Peer Review Reform: bit.ly/peer-review-reform

    Harnad, S. (1978) Inaugural Editorial. Behavioral and Brain Sciences 1(1).
    http://www.ecs.soton.ac.uk/~harnad/Temp/Kata/bbs.editorial.html

    Harnad, S. (ed.) (1982) Peer commentary on peer review: A case study in scientific quality control, New York: Cambridge University Press.

    Harnad, Stevan (1985) Rational disagreement in peer review. Science, Technology and Human Values, 10 p.55-62. cogprints.org/2128/

    Harnad, S. (1990) Scholarly Skywriting and the Prepublication Continuum of Scientific Inquiry Psychological Science 1: 342 – 343 (reprinted in Current Contents 45: 9-13, November 11 1991). cogprints.org/1581/

    Harnad, S. (1996) Implementing Peer Review on the Net: Scientific Quality Control in Scholarly Electronic Journals. In: Peek, R. & Newby, G. (Eds.) Scholarly Publishing: The Electronic Frontier. Cambridge MA: MIT Press. Pp 103-118. cogprints.org/1692/

    Harnad, S. (1997) Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright. Learned Publishing 11(4) 283-292. Short version appeared in 1997 in Antiquity 71: 1042-1048. Excerpts also appeared in the University of Toronto Bulletin: 51(6) P. 12. cogprints.org/1694/

    Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (5 Nov. 1998), Exploit Interactive 5 (2000): and in Shatz, B. (2004) (ed.) Peer Review: A Critical Inquiry. Rowland & Littlefield. Pp. 235-242. cogprints.org/1646/

    Harnad S. (2002) BBS Valedictory Editorial (2002) Behavioral and Brain Sciences 24 users.ecs.soton.ac.uk/harnad/Temp/bbs.valedict.html

    Harnad, S. (2003) PostGutenberg Peer Review the invariant essentials and the newfound efficiencies users.ecs.soton.ac.uk/harnad/Temp/peerev.pdf

    Harnad, S. (2009) The PostGutenberg Open Access Journal. In: Cope, B. & Phillips, A (Eds.) The Future of the Academic Journal. Chandos. eprints.ecs.soton.ac.uk/15617/

  7. [...] system to evaluate the quality of their content. My earlier contribution to this blog, “A Troubled Trifecta: Peer Review, Academia & Tenure,” discussed this in more detail, as well as a number of articles and podcasts I participated in [...]

  8. Goodness me!! I’m really sure it’s not the favourite article on the topic, but I really appreciate it.
