Does the Peer Review Process Need Blockchain?

A new generation of scientists raised on Reddit and Instagram are exploring crowdsourced models of scientific peer review.

Much of today’s science seems to work like solving crossword puzzles, methodically filling knowledge gaps with pencil and paper, while Silicon Valley companies and their new products rush past like flashy Instagram Stories. A new generation of researchers is intent on transforming the process.

When we wrote about decentralized science, we noticed how new blockchain-based IP is overturning the way scientists work with one another, through decentralized autonomous organizations (DAOs) like VitaDAO. Since then, Big Pharma has taken notice, with Pfizer Ventures recently investing $500,000 in blockchain-based longevity research.

Some scientists are now getting DAO-based grants for their research, like Vera Gorbunova, who received $350,000 to screen for longevity-enhancing molecules. In an interview, she said she especially likes the decentralized approach because “this way many people can contribute.”

But maybe Silicon Valley’s video game-like speed is about more than how its companies are funded. When product managers in the fast-paced high-tech industry need feedback on an idea, they just… well, release it. Whether it’s a tweak to an ecommerce website or a new feature for your favorite game, somebody has an idea and they ship it. You know the drill: no intermediary required. Even when there are intermediaries (like the Apple App Store), the submission guidelines are simple and optimized for speed. The only thing approaching “peer review” is customer reaction: product ratings, sales, and beta user feedback. Good reviews make more sales, which fund the next round of feature development.

What if science could do the same thing? A lab researcher learns something about a new molecule. Why wait to print it in a new publication? What if science had its own up-down rating system, not just for big, final discoveries, but for everything that researchers produce? That’s the idea behind several new peer review systems being proposed.

How do you rate science?

Of course, good science, especially translational science, depends on accuracy in a way that the latest gizmo doesn’t. If your favorite app has a few bugs, they’ll be fixed in the next update. Too many bugs and their competitor will get your business instead. Scientists, by contrast, need some assurance about their data and conclusions before they can progress to the next translational step.

Traditionally they receive this assurance through the academic publishing system, which relies on editors and reviewers who look over the results and provide some quality control before a new discovery makes it to the rest of the world. This works, but it’s often slow, and it leaves a lot of great information behind.

A big reason for the slow pace is the review process itself. Academic journal editors rely on an unpaid army of reviewers to vet each piece. But this system of “peer review,” which critics say was designed for the pre-internet days when it was expensive to print and mail copies of journals, slows the feedback process. Readership is limited until the publisher and a few generally anonymous reviewers agree to release it. If one of them disagrees, the paper, along with whatever comments were made while evaluating it, may never reach the rest of us. This hard, thankless, and unpaid task makes it difficult to find enough qualified reviewers to process the two million manuscripts winding their way to print every year.

One publication, the open-access, nonprofit biomedical and life sciences journal eLife, threw down the gauntlet last month and announced that, come January 2023, it will move to an exclusively “open peer review” system. In open peer review, reviewer comments are public and carry attributions. But the huge decision eLife made is not simply implementing open peer review, but doing away with peer review as a prerequisite for publication altogether. They will no longer even ask reviewers to weigh in on whether an article should be published.

In other words, the go/no-go publication decision will not depend on peer review at all. Rather than delaying publication until peer reviewers are found, gathered, and cajoled into making their decisions, as almost every journal does now, any article submitted under the new model will move to publication immediately after a short “assessment” by editorial reviewers deems it worthy of review. The substantive peer review will take place after publication, in the form of public comments on the article’s strengths and weaknesses from selected reviewers. This could be the start of a major change in scientific publishing.

“We are relinquishing the traditional journal role of gatekeeper in favor of a new approach that restores autonomy to authors and ensures that they will be evaluated based on what, not where, they publish,” the journal’s editorial board wrote in an editorial.

Another major change in scientific publishing could come from the same blockchain-based infrastructure that’s enabling the rise of the rest of decentralized science. Washington University faculty member and VitaDAO core contributor Tim Peterson proposed his own peer review alternative, called The Longevity Decentralized Review (TLDR), and is assembling a team of editors to begin reviewing papers on longevity and aging.

The Longevity Decentralized Review (TLDR)

TLDR works a lot like Reddit: researchers first post their work publicly, either directly or to one of the numerous “pre-print” servers like bioRxiv or medRxiv. These have been around for several years but became much more influential during the COVID-19 pandemic because of the speed with which they could bring research to other scientists. Reviewers are paid by the TLDR site, which is funded through charitable donations and fees from anyone who would like their manuscript peer-reviewed. VitaDAO is one of TLDR’s backers, offering $VITA tokens for peer review of longevity-related projects of interest to VitaDAO. It’s anybody’s guess whether this will amount to meaningful income for reviewers, but it will be more than the zero dollars and zero cents they earn now.

Test driving a new system

I recently tried one blockchain-based peer-review site, ResearchHub, which, like Peterson’s new TLDR, lets anyone post a scientific article for comments. Users can write reviews of the articles and then upvote or downvote the various comments. Users who pass a certain threshold of trustworthiness, based on the reviews they receive or the ones they write, are given ResearchCoin (RSC), an internal currency, to “donate” to particularly useful articles, comments, or fellow researchers. ResearchHub, originally conceived in a 2019 Medium post by Coinbase co-founder Brian Armstrong, now boasts thousands of members and an active community of reviewers.
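The mechanics described above, votes that build a reviewer's reputation and a trust threshold that unlocks token rewards, can be sketched in a few lines. This is a toy model under assumed rules: the threshold value, vote weights, and reward amounts below are invented for illustration and are not ResearchHub's actual parameters.

```python
# Toy model of a ResearchHub-style reputation system. All names and
# numbers are illustrative assumptions, not ResearchHub's actual rules.

from dataclasses import dataclass

REWARD_THRESHOLD = 100  # hypothetical reputation needed to earn tokens


@dataclass
class Reviewer:
    name: str
    reputation: int = 0
    tokens: int = 0


def vote(reviewer: Reviewer, up: bool) -> None:
    """An upvote raises reputation by one; a downvote lowers it by one."""
    reviewer.reputation += 1 if up else -1


def reward(reviewer: Reviewer, amount: int) -> bool:
    """Grant tokens only to reviewers above the trust threshold."""
    if reviewer.reputation >= REWARD_THRESHOLD:
        reviewer.tokens += amount
        return True
    return False


alice = Reviewer("alice")
for _ in range(120):   # 120 upvotes on her reviews
    vote(alice, up=True)
for _ in range(5):     # 5 downvotes
    vote(alice, up=False)

assert alice.reputation == 115
assert reward(alice, 150)       # above threshold, so tokens are granted
assert alice.tokens == 150
```

The point of the design is that spending power is gated by earned trust: a brand-new account can vote, but it cannot distribute rewards until the community has upvoted its contributions past the threshold.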

To try the system, I first wandered through some of the “hubs,” where I noticed a social science publication I thought was interesting. I posted my rating, along with a short comment. The author immediately responded with an upvote and thanked me with 150 RSC. 

Later I submitted my own essay describing some of my thoughts on how to improve science, and I quickly received similar feedback, some of which I upvoted and rewarded in turn. Ultimately I received about $13 for my trouble. (See the current price of RSC.)

Can blockchain-based solutions be gamed? Sure: as in the current system, hypercompetitive people will find ways to cheat, but in a radically transparent blockchain, that’s pretty hard. You can’t fake a crypto address, so although you can be anonymous, you can’t pretend to be somebody you’re not, and everything you do is permanently associated with that address. More interestingly, and unlike the current system, this lets you be pseudonymous if you want: reviewing multiple papers under a name whose identity might be unknown even to the publisher.

What stops ignorant or malicious amateurs from “reviewing” data about vaccines or climate change? Well, nothing—except reputation. Comments—and reviewers—can be downvoted as easily as they are upvoted. If anything, the more transparent playing field might better expose people who simply don’t know what they’re talking about.

Who owns the IP?

It’s far from clear that these peer review alternatives will have much effect on big university and research-directed science, but they’re already raising questions about who owns the work. If a full-time researcher receives DAO funding, does the money go to the researcher or to her institution?

Today a researcher’s IP generally stays with the university, which after all owns the building, employs the scientists, and foots the legal bills. Outside entities can license existing IP or hire university researchers as consultants to create new IP the companies would own, simply paying them for their time.

When I asked Tim Peterson what happens when DAO funding goes to individual scientists rather than their university employers, he admitted he didn’t have an answer yet. “It’s going to boil down to the extent to which the universities own the brains of all their talent.”

And where is that talent going? Maybe older scientists will prefer the tried-and-true crossword puzzles of the past, but to younger researchers raised on the instant feedback of Instagram and Reddit, I’ll bet the future looks a lot more like TLDR and ResearchHub.

Go Deeper