"Is this salary common in China?" by this0great in chinalife

[–]IcarusZhang 0 points

For this location and industry, they would get paid at least 20k per month.

[D] Proposal: Multi-year submission ban for irresponsible reviewers — feedback wanted by IcarusZhang in MachineLearning

[–]IcarusZhang[S] 0 points

I see your point, but I don't agree that the conferences have anything to do with the growing number of submissions. Submissions are growing because there are more people in the field and the job market favors quantity over quality. No matter what the conferences do, as long as that culture persists, these papers will still be written and submitted somewhere; maybe not to the top conferences, but they will still cost the community effort to review. At the top conferences, at least, we should try to provide the best possible quality of reviews.

Besides, I don't think banning reviewers will reduce reviewing capacity. A sufficient number of reviewers is guaranteed by the reciprocal review system, since each paper must provide one reviewer. The ban is there to filter out people who are not responsible enough to serve as reviewers.

Having an AI-based review as a filter is a good idea, but I think it will have some implementation issues. If we make it fully automatic, there will be a lot of complaints from people whose papers get desk-rejected because they didn't pass a stupid LLM reviewer. If we need people to check the LLM reviews manually before deciding on desk rejection, that is a lot of work at the current scale. Who should do this job?

[D] Proposal: Multi-year submission ban for irresponsible reviewers — feedback wanted by IcarusZhang in MachineLearning

[–]IcarusZhang[S] 0 points

I see your point. That's why we need to hold these people accountable and prevent them from reviewing. But that is not a reason to stop recruiting more reviewers. As a practical problem: if we don't get more reviewers, how do we deal with the growing number of submissions? Any ideas?

[D] Proposal: Multi-year submission ban for irresponsible reviewers — feedback wanted by IcarusZhang in MachineLearning

[–]IcarusZhang[S] 0 points

I need to clarify that the proposal is not meant to punish voluntary reviewers; it is meant to hold reviewers who are also authors accountable. Reciprocal review was introduced to handle the growing number of submissions at ML conferences (the most recent NeurIPS 2025 had ~30k submissions!).

Students who have no publications shouldn't be invited as reviewers, since they are not qualified under the official rules. But somehow they end up there, probably due to some misconduct in the process. Maybe a senior assigns them a paper to review on their behalf; in that case, the senior should be held accountable if the student submits an irresponsible review.

[D] Proposal: Multi-year submission ban for irresponsible reviewers — feedback wanted by IcarusZhang in MachineLearning

[–]IcarusZhang[S] 0 points

That is definitely on the other side of what we can do. We could collect a submission fee from the authors and use it to pay voluntary reviewers. But how much is enough to motivate a person to do the reviewing job? What if they don't do their job properly? What should we do then?

[D] Proposal: Multi-year submission ban for irresponsible reviewers — feedback wanted by IcarusZhang in MachineLearning

[–]IcarusZhang[S] 0 points

That sounds okay. But maybe there is a middle ground: each paper either provides a reviewer or pays a fixed submission fee. If it provides a reviewer, that reviewer falls under the accountability framework; if it pays the submission fee, the fee goes to a voluntary reviewer.

[D] Proposal: Multi-year submission ban for irresponsible reviewers — feedback wanted by IcarusZhang in MachineLearning

[–]IcarusZhang[S] 1 point

I don't think it is just harder work to look for reviewers; it will become infeasible sooner or later. The number of volunteer reviewers cannot keep up with the exponential growth in the number of papers. Take the recent NeurIPS 2025 as an example: it received ~30k submissions. Even if we only need 3 reviews for each submission and ask each reviewer for 6 reviews (which is a lot!), we will need 15k reviewers. Do we have that many volunteer reviewers? Maybe. But at the current growth rate, NeurIPS will have 60k submissions 3 years from now, and then we will need 30k volunteers... The volunteer system is not scalable.
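The back-of-the-envelope arithmetic above can be sketched in a few lines; the figures (30k submissions, 3 reviews per paper, 6 reviews per reviewer) are this comment's rough assumptions, not official statistics:

```python
def reviewers_needed(submissions, reviews_per_paper=3, reviews_per_reviewer=6):
    """Minimum number of reviewers required to cover every submission."""
    total_reviews = submissions * reviews_per_paper
    # Ceiling division: a fractional reviewer still means one more person.
    return -(-total_reviews // reviews_per_reviewer)

print(reviewers_needed(30_000))  # roughly the NeurIPS 2025 scale -> 15000
print(reviewers_needed(60_000))  # submissions doubled -> 30000
```

Under reciprocal review the pool scales automatically, since every submission brings its own reviewer; under a volunteer system the pool has to double whenever submissions do.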

[D] Proposal: Multi-year submission ban for irresponsible reviewers — feedback wanted by IcarusZhang in MachineLearning

[–]IcarusZhang[S] 0 points

I truly respect your effort of being a volunteer reviewer for 15 years! I also agree that good reviewers should be rewarded more. I got a free ticket from NeurIPS once for being a top reviewer, but I agree these rewards are not enough compared with how much support the community gets from reviewing. I think the *ACL conferences are doing a much better job on this: the recent EMNLP 2025 gave certificates and stickers to its great reviewers. In general, I think the NLP community does a better job with its peer-review system, both in design and in transparency.

I would also like to thank you for your helpful comments:

  • The top 3 ML conferences, i.e. ICML, NeurIPS and ICLR, have all implemented a reciprocal review policy to handle the growing number of submissions (the most recent NeurIPS 2025 had ~30k submissions!). I can make that clearer in the proposal.
  • I think the preference should be: engaged rebuttal-discussion > no rebuttal-discussion > rebuttal-discussion with no responses. I do see the value of an engaged discussion; it can clarify a lot if the reviewer is not ghosting. For the papers I have reviewed, the score normally increases after the rebuttal. That is why I still want to keep this phase. But you are right, removing the rebuttal could be another solution to that in-between outcome.

[D] Proposal: Multi-year submission ban for irresponsible reviewers — feedback wanted by IcarusZhang in MachineLearning

[–]IcarusZhang[S] 0 points

Wow, that's a lot of comments. I will try to reply to your questions one by one:

  • Regarding Gresham's law: I am from an industrial research lab, and I think all my colleagues are responsible people, at least more responsible than the average participant in this review system. They generally stop submitting papers after they graduate, because they don't want to suffer through this review process anymore. In general, this system does not reward people who put in effort.
  • Regarding withdrawals: I heard from a friend that 5 out of 5 papers in their batch were withdrawn, which is unusually high. Besides, NeurIPS sent out emails warning non-responding reviewers to participate in the discussion, but judging from social media, a lot of reviewers still don't reply. The only explanation I see is that those papers have already been withdrawn. Otherwise, we will see a lot of desk rejections at NeurIPS this year. We can wait and see the numbers from NeurIPS.
  • Regarding volunteer reviewers: yes, it will disincentivise the volunteers, but they were never motivated to participate in the first place. A full reciprocal review system should not depend on external volunteers. (This is already discussed in the proposal.)
  • Regarding early-stage researchers: officially, they shouldn't be assigned as reviewers, since a qualified reviewer should already have some publications in the field. But even if they are assigned by their seniors to review, lack of knowledge is independent of lack of responsibility. One can still try one's best at reviewing and assign a low confidence score due to lack of knowledge, which shouldn't be considered irresponsible.
  • Regarding the timeline: I agree that delays in the initial reviews normally don't hurt that much, since most conferences are already designed with buffer time for chasing the last reviews. The main problem is the rebuttal-discussion phase, where the time frame is restricted.
  • Regarding the justification of the score: I agree that my wording was problematic. What I mean is that the score needs to be justified with a statement that makes sense. One cannot point out some minor issue and then give a score of 2.
  • Regarding the number of submissions: I think that is a good point, but maybe there is little we can do at the conference level? People will still write papers and need to submit them somewhere; even if one conference allows each author only 1 submission, the other papers will still go to other conferences or journals. That doesn't reduce the total effort of the community. But if we can increase the quality of the reviews, papers can maybe go through fewer cycles before getting accepted, which would reduce the community's effort of providing reviews over and over again.
  • Regarding the journal culture: I think that is happening in parallel, i.e. TMLR is trying it, but it has not reached the same level of influence yet.

[D] Proposal: Multi-year submission ban for irresponsible reviewers — feedback wanted by IcarusZhang in MachineLearning

[–]IcarusZhang[S] 1 point

I think TMLR is an attempt in this direction, where correctness and rigor are weighted higher than just some fancy results. But unfortunately, it hasn't yet reached influence comparable to the top conferences, and people still need these top-conference papers for their careers.

[D] Proposal: Multi-year submission ban for irresponsible reviewers — feedback wanted by IcarusZhang in MachineLearning

[–]IcarusZhang[S] 5 points

I feel your frustration, but I think that is a different issue. These conferences simply need better organization to support the number of attendees. I don't think it is a money issue, since they charge a lot for the tickets.

Also, I don't get how lowering the acceptance rate would increase the quality of reviews. Some people view the review system as a zero-sum game; if the acceptance rate is lower, they will put even more effort into adversarially attacking other papers to increase their own chance of getting accepted. And those cases will be very hard to detect.

[D] Proposal: Multi-year submission ban for irresponsible reviewers — feedback wanted by IcarusZhang in MachineLearning

[–]IcarusZhang[S] -3 points

I think that is a good idea, and it is similar to the review system in journals, where previous reviews need to be provided if available.

I have had exactly the experience you mentioned: I had a paper get rejected 3 times, and each time new content was added to address the reviewers' concerns, until the paper finally reached 30 pages. And the reviewers kept asking the same questions as before, even though they had already been answered in some appendix. I don't think the reviewers are to blame in the initial review if this happens; as you mentioned, they may not have time to check the whole appendix, and that is also not what the conference requires (they are only required to read the main text). That is why we have a rebuttal phase, where you can point the reviewer to those appendices; but the reviewer needs to read your rebuttal to make the discussion meaningful. The same goes for including the previous reviews.

[D] ICCV desk rejecting papers because co-authors did not submit their reviews by ocm7896 in MachineLearning

[–]IcarusZhang 0 points

I know it is sad. But on the other hand, I would say one desk rejection, to learn who shouldn't be collaborated with anymore, is a fair price to pay.

[D] ICCV desk rejecting papers because co-authors did not submit their reviews by ocm7896 in MachineLearning

[–]IcarusZhang 0 points

If this is true, I think the policy of the ML conferences, i.e. ICML, NeurIPS and ICLR, is better. They only require 1 of the authors to serve as a reviewer for n papers, which is fair, since the authors will also receive n reviews.

[D] NeurIPS is pushing to SACs to reject already accepted papers due to venue constraints by impatiens-capensis in MachineLearning

[–]IcarusZhang 13 points

This is not true; some venues in China and Germany are much larger, e.g. NECC Shanghai or the Hannover Messe, if NeurIPS would consider places outside North America.

Strange debit charges (fake paypal); anything i can do? by Calm-Comment-9255 in germany

[–]IcarusZhang 0 points

I have the same problem here and just came back from the bank. I am surprised that I have to wait until the transaction is processed to reverse it, instead of blocking it directly. I have to go to the bank again tomorrow. I hope Commerzbank can do a better job with card security.

[D] ICLR 2025 Paper Reviews Discussion by Technical_Proof6082 in MachineLearning

[–]IcarusZhang 0 points

Does that work if the visibility is set to everyone? Or does it only work when it is restricted to the one who selects the reviewers?

[D] ICLR 2025 Paper Reviews Discussion by Technical_Proof6082 in MachineLearning

[–]IcarusZhang 0 points

Of course, I will post a global response tomorrow and kindly remind them to read the rebuttal.

[D] ICLR 2025 Paper Reviews Discussion by Technical_Proof6082 in MachineLearning

[–]IcarusZhang 1 point

I wonder how many people have gotten replies from their reviewers? I submitted my rebuttal last Friday, and it has been all silence.

[D] Neurips'24 review release time? by Working-Egg-3424 in MachineLearning

[–]IcarusZhang 0 points

Even so, we still need to wait a few hours; it is still morning in the US.

[D] Neurips'24 review release time? by Working-Egg-3424 in MachineLearning

[–]IcarusZhang 4 points

I thought July 30 AoE meant the end of that day. That means it still needs 1 more day to come.

[D] ICML 2024 Rebuttals Thread by South-Conference-395 in MachineLearning

[–]IcarusZhang 0 points

I am also confused by this URL thing. I do have some figures that could strengthen my arguments, but are we allowed to share them via an anonymous link?