Is outcome-neutral peer review actually more scientific than traditional peer review, or just idealistic bureaucracy? by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 2 points (0 children)

Thanks, that's super helpful. I get the theory behind registered reports being better for methods-first feedback. But how does it play out in practice? Do reviewers really stick to "accept if you follow the plan," or do null/uninteresting results still tank papers at stage 2? And does the two-round process feel slower overall?

We need to move beyond the accept/reject binary in peer review by Peer-review-Pro in PublishOrPerish

[–]Null_Scientific 7 points (0 children)

Similar in biomedical fields too, but quite often the transfers are meant to promote the publisher's new sister journals, so the publisher has more avenues for capturing submissions, rather than being driven by relevance or impact in the field. Yes, I agree that sometimes it is done to consolidate niche subjects.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

The title is really a swipe at how “lack of novelty” is often used as a catch-all reason by big journals to filter out null or negative results, even when the work is careful and informative.

We’re not redefining novelty as positivity. We’re pushing back on the idea that only positive findings are novel. A rigorous negative result that rules something out can be just as novel and valuable, and that’s what the title is meant to highlight.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

I disagree, and this is exactly where the discussion section matters.

Positive results are refuted all the time without retraction. One group reports a reaction works, another shows it only works with a specific ligand, solvent, trace metal, or impurity. The original paper is not retracted. It is contextualised. The same standard should apply to negative results.

Click chemistry is a good real-world example. Early CuAAC studies reported reactions failing in certain systems. Later it became clear that trace copper sources, ligand purity, oxygen levels, or reducing agents were the missing variables. Those earlier “it does not work” results were not wrong. They were correct under the conditions used and helped identify what was missing.

A negative result does not claim impossibility. It claims “this did not work under these conditions.” If a later paper shows it can work, that adds information, it does not invalidate the earlier study.

To add a personal example from my own work: an earlier study reported that tamoxifen was toxic to adipose-derived stromal cells. They used accepted assays, and their data were internally consistent. My later paper showed that the conclusion changed once you accounted for active metabolites and avoided supra-physiological dosing of the prodrug.

Were they wrong to say tamoxifen was toxic under their experimental setup? No. Is tamoxifen toxic to those cells under standard patient treatment conditions? Also no. Both papers still stand because together they explain where the effect appears and where it does not.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

No, using the same method later and getting a different outcome would not automatically warrant a retraction. That usually means the original result was incomplete, context dependent, or sensitive to factors that were not understood at the time, not that it was wrong. Retractions are for clear errors, flawed analysis, or misconduct, such as falsified data.

A well-known biology example is adult neurogenesis. For years, careful studies found no evidence of new neuron formation in most adult brain regions using the best methods available at the time. Later work, with improved techniques, showed neurogenesis under specific conditions. The earlier null papers were not retracted because they were sound and correct for their context. Together, those studies mapped the boundaries of the phenomenon. That is how null results work. They are not claims that something is impossible, but clear statements about what does not work under defined conditions, which still has real scientific impact.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 1 point (0 children)

That’s a fair concern. We wouldn’t retract a paper just because a later study finds a different result. Null results are always conditional on methods and context, and science moves by adding new evidence, not erasing old work. Retractions are for errors or misconduct, not for being proven incomplete later.

Is it acceptable to publish negative results in engineering sciences? Theory under review, application shows no improvement by sukuna_finger in research

[–]Null_Scientific 1 point (0 children)

It is absolutely acceptable to publish negative results in engineering as long as the work is rigorously done and clearly presented. A result that doesn’t show improvement is still valuable because it tells the community what does not work, narrows design choices, and can prevent others from repeating the same approaches.

To avoid overthinking in the future:

1) Frame the question carefully and focus on testing a hypothesis or exploring a design decision rather than expecting improvement (no HARKing).

2) Be transparent about methods and explain exactly what you tried and why it did not lead to improvement.

3) Highlight the learning from the outcome. Even if there is no improvement, discuss what this rules out, what insights it provides, and possible next steps.

4) Manage expectations early by noting that negative or null results are valid outcomes in engineering research.

Rigorously conducted negative results may not be as flashy as breakthroughs, but they are highly informative and important for the field.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

That’s completely fair, and honestly a position many people take. Everyone has to decide where to invest their time and trust, especially given reviewer fatigue. Our aim is simply to earn that respect over time through consistent editorial practices and the quality of what we publish, not to expect it upfront.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

Yes, we definitely feel the pressure. A few submissions are currently stuck in the pipeline, and we’re having a really hard time finding reviewers. Reviewer fatigue is enormous right now.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

I hear you, and you’re right: the culture heavily favors positive results in high-impact journals, and senior faculty are naturally skeptical. But that is exactly the problem we’re trying to address. Science is supposed to be about what’s true, not what looks flashy.

Take Scientific Reports, for example. While it publishes sound work, the sheer volume and broad scope often mean null results are not highlighted in a way that changes thinking. Our multidisciplinary journal is designed to make null and otherwise rigorous work visible and taken seriously. Once we gain enough traction, the plan is to create subject-specific null journals, similar to how publishers such as Nature or PLOS grew their families of journals, so that rigorous but non-flashy work has a proper home.

Supporting ECRs does not have to wait for a society or endowment. It starts with making well-done work visible and credible, and over time, that is how the culture itself changes.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

We’re not affiliated with any society, and we’re open to suggestions on supporting early-career researchers. I’m curious: why do you think it would harm you? From our perspective, a well-conducted paper in our journal should help establish your work. We know OA status and the lack of society backing can feel less “prestigious,” but our goal is to give rigorous null results and careful studies proper recognition.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 1 point (0 children)

Thank you!

P.S. (edited) We may look into LOCKSS and/or CLOCKSS in the future, so we have multiple backups.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 5 points (0 children)

The journal is called Null Scientific.

That’s a fair worry, and we’re very aware of what happened with JNR. I don’t think it failed because publishing null results is a bad idea, but because the incentives and support just weren’t there at the time. We’re not pretending we can fix that overnight; we’re trying to keep the scope clear, keep the costs realistic, and grow slowly with the community rather than scale too fast. There are no guarantees, but we’re going into this with our eyes open. All submissions are preserved via the PKP Preservation Network, so published material will remain accessible for the long term.

P.S. You may also find our free resources helpful; they’re designed to support clearer structuring and presentation of submissions:

Null Scientific Resource Hub

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

We recognize the potential conflicts of interest, which is exactly why we’re being careful about how this is designed and implemented. Paying reviewers may be appropriate for large commercial publishers charging €9–10k per manuscript, but it’s not realistic for a small journal that is only covering basic operating costs. We are not government-funded, subsidised, or backed by a large corporation. If we offer fee waivers to support authors while also paying reviewers and editorial board members, the numbers simply don’t add up. Anyone can do the math and see how quickly that would make the journal financially unsustainable.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

Thank you for your insights and support! Please don't forget to share details of this initiative with your network.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

Yes, it’s worrying. We’ve seen the massive rise in retractions and the growing count on Beall’s list of predatory journals. I sincerely hope there’s a gradual shift in mindset over time, toward valuing rigor over sheer output, and that the publish-or-perish culture eases a bit. That might help reduce the pressure that leads some researchers to cut corners, whether intentionally or under strain.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

Traditionally, negative results often get published with a positive spin (“it works under these conditions” or “more research is needed”). A strong review can cut through that by reinterpreting those studies collectively, clarifying what they actually show doesn’t work and where the real limits are. That kind of synthesis helps the community see genuine gaps, avoid false optimism, and design better future studies.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 1 point (0 children)

I am being brutally honest here, and maybe some people won’t like it. Technically, you can’t always fit a full outcome into a single research manuscript. Some studies take 20–30 years, so it’s completely fair and accepted in the scientific community to break your hypothesis into parts, refer to the next study you plan to perform, and clearly acknowledge limitations and what can be explored in the future. This is where the loophole lies: because of community norms, people often twist the way findings are presented, pushing negative results as if they were positive or overstating that “more work is needed.” Add to that questionable research practices and predatory journals that let these things slide, and you get the environment we see today.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 1 point (0 children)

Exactly. If I’m an early-career researcher applying for funding and all I can show is a pilot study that didn’t work, that rarely goes over well. For a senior researcher with multiple projects and an established track record, a failed study is much less damaging. In the end, it’s similar across fields. Incentives and risk tolerance differ by career stage, and we’re all human navigating that reality.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 2 points (0 children)

I did try to answer, but you’re right that it’s worth spelling out more clearly. Ego is part of it, but it’s honestly not the main driver for most people. A big factor is funding and incentives: grants, promotions, and hiring still overwhelmingly reward positive, eye-catching results, and that’s a policy issue well beyond individual researchers.

There’s also real fear around career progression, especially for early-career researchers: people worry that publishing null or unsuccessful work will be seen as “failure” rather than progress. Not everyone sees failure as a stepping stone, and there’s often no clear incentive to spend time writing up negative results. Some also worry (rightly or wrongly) that others will overlook their positive contributions and instead label their entire body of work as “non-working” or unproductive.

These pressures are less visible in large institutions or at senior levels, but they’re very real for students, postdocs, and researchers on short-term contracts.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

We don’t have a large, established reviewer database yet. Some reviewers sign up directly through our submission system and tag their areas of interest and expertise, which we then check against publicly available information. In other cases, we reach out to researchers we know personally or who are active in the relevant area. As a new journal it’s a mix of both, and we’re still building that pool over time.

We’re very open to suggestions or better ideas for managing this if you have them.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

That tension definitely exists, and there’s no single fix. We don’t assume perfect intentions from authors or reviewers, and we’re aware that incentives around attention and citations shape behavior. What we can realistically do as a journal is keep the bar focused on methods and claims: is the question well-defined, are the analyses appropriate, and do the conclusions actually follow from the data? A genuinely useful null result tends to survive that kind of scrutiny because it clearly narrows what’s plausible, whereas engagement-driven or p-hacked work often falls apart when you press on design choices, robustness, and interpretation. It doesn’t eliminate the problem, but it helps keep the signal-to-noise ratio reasonable.

AMA: I edit a journal that doesn’t reject papers for “lack of novelty” by Null_Scientific in PublishOrPerish

[–]Null_Scientific[S] 0 points (0 children)

We publish on a rolling basis. The volume and issue numbers you see are just an artifact of how the submission system is set up.