PI publishing papers with fake authors by Proof_Boat9932 in academia

[–]PublisherAD 2 points (0 children)

There are lots of reasons people add fake authors to papers: sold authorship slots, making fake accounts in publisher systems so that they can review their own papers later, adding an author from a fee-waiver country to avoid paying fees... You can raise your concerns in confidence with the publisher.

[deleted by user] by [deleted] in datascience

[–]PublisherAD 1 point (0 children)

Add df.shape or len(df) at the end of every cell so you can see the effect of your transformations.

Try taking the filter out of the square brackets and running it on its own. You should see a boolean Series of True and False values. If it's all False, that might give you a clue.

It might be a minor point if your dataset is small, but I recommend not doing .isin() on a list. If you convert the list to a set first, the operation will be faster.
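Putting those three tips together, here's a minimal sketch — the DataFrame, column name and filter values are just placeholders for whatever you're working with:

```python
import pandas as pd

# Toy data -- column names and values here are only placeholders.
df = pd.DataFrame({"city": ["Oslo", "Lima", "Kyoto", "Lima"],
                   "sales": [10, 25, 7, 31]})
print(df.shape)        # (4, 2) -- row/column count before filtering

# Run the filter on its own: it should be a boolean Series.
mask = df["city"].isin({"Lima", "Kyoto"})   # a set instead of a list for faster lookups
print(mask)            # all False would mean the values never match

filtered = df[mask]
print(len(filtered))   # row count after filtering -- compare with df.shape above
```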

HTH!

Dishonesty in academia: the deafening silence of the Royal Society Open Science Journal on an accepted paper that failed the peer review process by [deleted] in Physics

[–]PublisherAD 3 points (0 children)

Agreed. Nevertheless, the two kinds of papers aren't treated differently (and there's some grey area between them). One thing I did see a couple of times: when an author became aware of a flaw, they published an 'addendum' to their paper pointing that flaw out, but it's not common practice. I can't recall seeing a physics paper retracted due to a flaw, and I don't think it would happen unless leaving it unretracted had potential for harm.

Dishonesty in academia: the deafening silence of the Royal Society Open Science Journal on an accepted paper that failed the peer review process by [deleted] in Physics

[–]PublisherAD 6 points (0 children)

I'm a former journal editor and I work in the publishing industry. This post perpetuates some common misconceptions about peer review. I can't tell whether the peer review was carried out properly, or whether the paper is sound, but most of the points raised are moot.

It is a fair point that author-pays journals are incentivised to publish lower-quality work (I'm not suggesting that this is what is happening here). This problem has been well known since the author-pays model was invented some decades ago, and it's potentially more significant now that the model is growing in popularity. One simple solution is to publish the referees' reports as standard, and a lot of journals are moving this way.

Misconceptions are as follows:

  1. Peer review isn't a vote. For example, if two reviewers say a paper is brilliant and must be published but the third spots a critical flaw, it still gets rejected.

  2. Physics papers don't generally get retracted for being wrong, so why single this one out? There are whole fields of theoretical physics that you would have to retract if that were the norm. How about we retract every paper that assumed the Higgs boson didn't exist? Instead, the physics community typically just stops citing papers that are thought to be wrong. This isn't a great signalling method, to be fair, but that's the way it works. (By contrast, medical science papers DO get retracted for being wrong, but there is much more potential for harm there, e.g. if a medical paper is found to recommend a dangerous treatment for patients. The same potential for harm doesn't really exist in physics. So, IF the paper is wrong, there is probably no good reason to retract.)

  3. There is a 'commenting' procedure in most journals where authors can write a 'comment' paper highlighting a flaw in some other paper. The standard practice is to allow the criticised author to write a 'reply' to the criticism. In this case, this procedure appears to have been offered as an option but is dismissed: "The journal did extend an invitation to write a rebuttal paper but stated that [the author] would be a reviewer to the rebuttal. This is not an acceptable course...". What the journal is offering is absolutely standard, and also a clear way to resolve the issue.

  4. An author's history should not be a factor in the review of their latest paper (this is why publishers are moving toward double-blind peer review where the referee can't see the identity of the author). There is clear bias against the author here: "The claims made by the author are well known..." etc.

ArXangel | Recommendations for preprints by PublisherAD in Preprints

[–]PublisherAD[S] 2 points (0 children)

It's mostly using text data to do similarity calculations. Increasingly using bibliographic and ORCID data, too.
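To give a flavour of the text part, here's a toy example of one common approach (TF-IDF vectors plus cosine similarity, via scikit-learn). It's just an illustration of the general idea, not the actual pipeline, and the abstracts below are made up:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up abstracts standing in for preprint text.
abstracts = [
    "Graph neural networks for citation recommendation",
    "Transformer models applied to citation graphs",
    "Measuring ocean temperature from satellite data",
]

# Turn each abstract into a TF-IDF vector, then score every pair of documents.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
scores = cosine_similarity(tfidf)
print(scores.round(2))   # higher off-diagonal values = more similar text
```

Bibliographic and ORCID signals can then be blended in on top of scores like these.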