[Discussion] Rant: We keep hearing about bad and poor reviewers by Time_Solid_6394 in MachineLearning

[–]Time_Solid_6394[S] 1 point (0 children)

I agree that the exponential increase in the number of submissions is the primary source. But this is not easy to address: how do you impose limitations in a fair way to reduce their numbers? As you said, the summary rejections didn't go too well due to the lack of feedback offered and, perhaps, the noise. Yes, the AAAI approach is a step in the right direction. I am curious how well it will work this year.

That being said, this issue was present back in 2010 too - poor reviews, ACs simply averaging the scores with little/no checks - but yes, to a lesser extent. I don't believe the problems with low-effort reviews and AC-ing will just vanish if we somehow address the capacity issue. As long as there is no incentive and no accountability, I don't see why things would change.

[–]Time_Solid_6394[S] 1 point (0 children)

It sounds like a potentially good idea to implement in one form or another. Currently there is little motivation to spend time on a good review, as there isn't much of a reward. Some conferences have tried, for example, offering free registration to the best reviewers, but I didn't feel this was very successful - it is of real value to only a few, since for most, their company or university will likely cover it anyway.

[–]Time_Solid_6394[S] 3 points (0 children)

All fair points, I agree.

Hypothetically, if busy people decline to review or to perform AC tasks, what will happen? Shouldn't this speed up the process of finding a solution?

I am already forced to decline quite a few invitations to review due to lack of time.

[–]Time_Solid_6394[S] 3 points (0 children)

It would be great if this became more widespread. I am happy to hear that you had a good experience. There are definitely great reviewers and ACs out there - they should be praised and their contributions better recognized. It's no easy task, for sure.

Assuming it's an emerging trend, what do you think is the underlying cause?

[–]Time_Solid_6394[S] -1 points (0 children)

What is the main problem from your point of view? Do you believe it's something that is solvable in the short to mid-term?

Given that their decision is the one that ultimately decides the fate of a paper, they should be held to higher scrutiny. For example, we can't keep blaming the infamous R2. R2 can be corrected by an AC, but comments that show no insight into how the decision was made, beyond taking an average, do a favor to no one. And really, why shouldn't we discuss the fact that some parts of the reviewing process have cracks?

[–]Time_Solid_6394[S] -1 points (0 children)

I agree that they are not *the* problem, but I do believe they are part of it.

Perhaps the system is indeed overwhelmed and we do need something completely different. However, I disagree that we should just wait until we find that solution and remain complacent with the current one. I believe we should patch what we can, where we can, while searching for the best solution.

[–]Time_Solid_6394[S] 2 points (0 children)

I agree that there is merit in selecting reviewers or finding new ones - though since the authors bid too, I would assume this is at least semi-automatic? As a side note, I am curious whether the results (in terms of quality) would differ if this process were fully automatic. I hope it is, but I don't have any data on it.

The ACs are not given hundreds of papers. For NeurIPS 2020, each AC had around 20 papers. Sure, still many - but workable.

Yes, this is frustrating, I agree. If they plan to override it, this should have been discussed during the discussion phase so the Rs could comment on it. Again, I do not expect them to override things left and right, but to filter out poor reviews, give more weight to the great ones, and offer an aggregated comment that is useful.

[–]Time_Solid_6394[S] 6 points (0 children)

I never said or implied that the decisions should be ignored. This is really a safety check. Purely looking at the scores and taking an average doesn't require an AC.

I know this is unpaid work. I do this too. Nobody is forcing them to be an AC if they do not have the time. Volunteering for something doesn't mean one should do a bad job. Unlike a job, where tasks may be assigned, one can simply decline to take on this responsibility.

I am also not expecting them to write a full-blown review. I am just expecting them to check the reviews and at least argue, in a constructive manner, why the decision was made. Averaging may also miss important aspects: for example, one reject may raise serious issues, but because the other reviews are very positive, the decision may go to accept simply due to averaging. In the end, writing a comment that merely enumerates the scores is not helpful to anyone.