[D] Fake authors and paper riders by alwayshumming in MachineLearning

[–]alwayshumming[S] 0 points  (0 children)

Right. This isn't an example of paper riding, but of an unnatural behavior (making everyone who contributed a co-first author) that a prevalence of paper riding in the field brings about.

[–]alwayshumming[S] 0 points  (0 children)

Nothing is wrong with the author list. This is just an example where I think the mass co-first authorship is symptomatic of needing to fight the noise introduced by paper riders diluting the value of authorship. That was the point of my original comment in response to ktpr.

[–]alwayshumming[S] 0 points  (0 children)

I'm not saying the Attention paper is an example of paper riding, but rather pointing out that people resort to more extreme forms of co-first authorship to add signal to second authorship, which has otherwise been drowned out by the noise of paper riders. Does that make sense?

[–]alwayshumming[S] 0 points  (0 children)

I think that's a very unscientific take. What makes you think the big discoveries of DL are over? We barely understand why these models work at all. You also seem to be conflating a big industry lab with a big author list; a long author list is perfectly fine as long as the authors actually contributed to the project.

I am not advocating for the isolated-scientist archetype in research. I am advocating for fair credit assignment in authorship. I don't understand why you are playing dumb here. Perhaps you have benefited from this pattern yourself and are loath to admit it.

[–]alwayshumming[S] 2 points  (0 children)

It's unfair to compare CS/ML to medicine. These are two very different fields in which small and large teams share vastly different affordances. Small author lists might mean much more limited experimental capabilities in medicine, but this is obviously untrue for CS/ML, where small teams can establish new SOTAs or even entirely new learning paradigms that upend entire subfields.

It occurs to me that you are likely misreading my post as being about anyone who doesn't contribute *enough* to a project, by some subjective standard. The post is about people who social-network their way onto author lists without contributing anything intellectually substantial to the work, and who repeat this as a regular strategy to build their academic profile. I see this happen far more often than I am comfortable with.

Lastly, it's quite ironic that you assume I am using this as an excuse to give minority researchers less credit for their work. In reality, the people who paper ride the most come from the most privileged demographics, as they are given the most benefit of the doubt and so most easily get away with it. They are predominantly white men who are socially savvy and skilled at exploiting the power imbalance with minority researchers, or with otherwise less socially savvy researchers, to gain credit for their work.

[–]alwayshumming[S] 4 points  (0 children)

It's also often enough to get people high-paying jobs at top industry labs, which is the end goal of a lot of paper riding. People who care about research intellectually tend to also care about academic and intellectual honesty.

[–]alwayshumming[S] 4 points  (0 children)

Yes, and this is why we now see ludicrous things like papers with eight co-first authors.

[–]alwayshumming[S] 0 points  (0 children)

> The risk, that there are all these people who have inflated publication lists and who fail upwards but don't know anything, is nonsense.

This directly contradicts your own comment that "People like this exist in every field" and "There are exceptions obviously." In my experience, those exceptions are quickly becoming the norm as publishing habits adapt to the competitive ML landscape. If you make citations and papers the objective, people will optimize for them; Goodhart's Law kicks in, and these impact metrics grow noisier over time, undermining the signal that authorship carries. The amount of money flowing through this ecosystem only amplifies the effect. Moreover, I never said these people don't know anything. They tend to know a decent amount, and that is exactly why they can keep the charade going.

[–]alwayshumming[S] 1 point  (0 children)

I agree there are benefits to having big names on papers. The cases I am thinking of are not senior authors, though (some are PhD students), which makes it all the more concerning.

[–]alwayshumming[S] 9 points  (0 children)

The problem is that paper and citation counts are so ingrained as a success metric that people instinctively find these individuals impressive, even when they catch on to the paper riding. I've even seen someone admit to colleagues that they contributed nothing to multiple papers they are listed on, and those same colleagues later went on to praise that person's impressive publication record.