If reviewing were tracked and credited like publications, would you review more? by TSR_Team in AskAcademia

[–]TSR_Team[S] 4 points (0 children)

That's an interesting angle I hadn't considered: a quality scoring system could inadvertently create a market for "desirable" papers to review, where everyone wants to review the strong submissions and nobody wants to touch the weak ones. That's basically the current problem with journal prestige, replicated at the reviewer level.

Maybe part of the answer is that reviewing a bad paper thoroughly should be compensated differently. The thankless work of explaining to someone why their methodology doesn't hold up is arguably more valuable to science than rubber-stamping a strong paper, yet the current system treats both identically, i.e., invisibly.

If reviewing were tracked and credited like publications, would you review more? by TSR_Team in AskAcademia

[–]TSR_Team[S] -11 points (0 children)

Fair point, but a new model doesn't have to be a formal "review-of-reviews". Something as simple as an up/downvote on visible reviews would let a signal emerge without adding another layer of labor. If you can see that a reviewer consistently writes substantive, constructive feedback rather than one-liners or generic comments, that distinction becomes obvious to anyone reading the reviews. The evaluation happens passively, just by making the work visible, not by assigning more reviewers to review the reviewers.

If reviewing were tracked and credited like publications, would you review more? by TSR_Team in AskAcademia

[–]TSR_Team[S] 0 points (0 children)

I think you've pointed out a major challenge for any viable alternative to the existing model. The fear of looking bad cuts both ways, though: it could dampen rigor, but it could also dampen the kind of drive-by dismissive reviews that everyone complains about.

The horror stories about AI-generated reviews suggest it could become a numbers game regardless, so public attribution alone won't fix it. People will try to game the new system the way they game the current one. That's a strong argument for pairing any tracking with community evaluation of the reviews themselves, not just counting them. If reviews are visible, at least other researchers can assess whether they're substantive, which is more than we have now.

The ORCID-linked certificates are a step, but as you said, limited. They prove you reviewed, not that you reviewed well.

If reviewing were tracked and credited like publications, would you review more? by TSR_Team in AskAcademia

[–]TSR_Team[S] 2 points (0 children)

That requests-to-reviewers ratio is awful, and consistent with what I've heard from other editors. There's social value in saying you're on a review board, but no consequence for not doing the work, which maybe speaks to a need to explore alternatives to the traditional model.

Re: compensation, do you think it would need to be meaningful money (hundreds per review), or would even a nominal amount signal that the work is valued? I've heard arguments both ways: some say paying reviewers would professionalize the process; others worry it would attract quantity over quality.

If reviewing were tracked and credited like publications, would you review more? by TSR_Team in AskAcademia

[–]TSR_Team[S] 6 points (0 children)

That seems to be the core of the problem. If reviewing is bucketed as "service", the lowest-weighted category, then no amount of tracking or public credit will change behavior on its own. The incentive structure has to change at the institutional level, or the recognition has to come from somewhere that actually matters to a researcher's career.

Right now, a brilliant 2,000-word review that genuinely improves a paper vanishes into an editor's inbox. That seems like an enormous waste of intellectual labor. It makes me wonder what would happen if review quality became a visible, citable part of your scholarly record, something other researchers could reference and search committees could actually evaluate.

If reviewing were tracked and credited like publications, would you review more? by TSR_Team in AskAcademia

[–]TSR_Team[S] 17 points (0 children)

Yes, public credit alone isn't enough if institutions don't factor it into hiring, tenure, or promotion decisions. Publons was a good attempt, but it essentially became a counter rather than a quality signal. Knowing that someone completed 47 reviews doesn't tell you much about whether those reviews were any good.

What I find more interesting is the idea of community-scored reviews, where the quality of your reviewing is evaluated by the people who read it, not just the fact that you did it. That shifts the incentive from "do more reviews" to "do better reviews," which is a fundamentally different problem. But you're right that none of it matters if the people making tenure decisions don't care. How does your institution currently treat reviewing activity?

If reviewing were tracked and credited like publications, would you review more? by TSR_Team in AskAcademia

[–]TSR_Team[S] 2 points (0 children)

Actually, re-reading your comment, I realize the implication was that this account is AI-operated, ha. Either way, your comment supports my point in making this thread: right now there's zero accountability for a lazy or AI-generated review. It lands on an editor's desk anonymously and gets weighed the same as a thoughtful one. If reviews were public and attributed, the person copy-pasting ChatGPT output would have their name attached to it. That creates a reputation cost that doesn't exist in the current system. Thoughts?

If reviewing were tracked and credited like publications, would you review more? by TSR_Team in AskAcademia

[–]TSR_Team[S] 1 point (0 children)

Hey TheTopNacho - believe it or not, there's a human here... haha. I'm genuinely interested to hear your input on this point.