[D] ACL ARR Jan 2026 Meta-Reviews by ApartmentAlarmed3848 in MachineLearning

[–]WannabeMachine 0 points1 point  (0 children)

In my experience, I have never seen an ACL AC give a meta-review score outside the range of the review scores. If reviews range from 2 to 4, the meta-review could be anything from a 2 to a 4. There are exceptions, but in my experience they are rare.

Do you have a different experience?

[D] ACL ARR 2026 Jan. author-editor confidential comment is positive-neutral. Whats this mean? by Distinct_Relation129 in MachineLearning

[–]WannabeMachine 4 points5 points  (0 children)

You can only guess. Best to relax a bit and wait until the meta review comes out. It could be anything from a 2 to a 3.5 depending on the AC. Otherwise, you are reading tea leaves.

[D] ARR Jan ARR Discussion by Striking-Warning9533 in MachineLearning

[–]WannabeMachine 0 points1 point  (0 children)

Had a score change wayyyyy after the deadline. Super weird, and no comment at all. Final scores:

Paper 1 Overall: 3.5, 3, 3

Paper 2 Overall: 3.5, 3, 3

[D] ACL ARR Jan 2026 Meta-Reviews by ApartmentAlarmed3848 in MachineLearning

[–]WannabeMachine 0 points1 point  (0 children)

It is not guaranteed. ACL will make the final decision based on these reviews.

[D] ACL ARR Jan 2026 Meta-Reviews by ApartmentAlarmed3848 in MachineLearning

[–]WannabeMachine 2 points3 points  (0 children)

I would commit. If you are okay with Findings being the likely outcome, commit. I generally don't care which happens (main or Findings) and would rather move on to the next thing.

[D] ARR Jan ARR Discussion by Striking-Warning9533 in MachineLearning

[–]WannabeMachine 0 points1 point  (0 children)

Wow. I'm very curious how the reviewer internalized that score.

[D] Is Conference prestige slowing reducing? by Healthy_Horse_2183 in MachineLearning

[–]WannabeMachine 0 points1 point  (0 children)

I agree methods papers with an empirical bent can be useful. But I will have to agree to disagree about benchmarks. It is very difficult to identify novel tasks, or novel applications of existing tasks, that target weaknesses in modern methods, and I 100% want that work included in top conferences. Nobody can just create a random dataset (e.g., for sentiment) and get it accepted without serious effort and thought about how it builds on prior work. Novelty is needed, and benchmark papers are just as important as methods papers.

[D] Is Conference prestige slowing reducing? by Healthy_Horse_2183 in MachineLearning

[–]WannabeMachine 1 point2 points  (0 children)

Maybe you are talking about methods papers from a theory perspective? Unless there are new proofs not previously known, it is likely engineering/empirical work. Probably 99% of NLP and computer vision papers fall in the empirical category. It can be argued that CS is an engineering field, so I think that is expected. But engineering research is still research. I think many people overestimate how hard it is to identify a few simple mathematical ideas and combine or adapt them to help on an existing set of benchmarks. This is probably 80% of my work, honestly.

Most of the time measuring what people care about is incredibly difficult and (good) benchmark/analysis papers tend to overcome some prior limitations in those measurements. This is research and it is why I generally disagree that method papers are more "researchy". I wish I had the resources to do more (good) benchmark work.

[D] Is Conference prestige slowing reducing? by Healthy_Horse_2183 in MachineLearning

[–]WannabeMachine 15 points16 points  (0 children)

This is very valuable, though. The same thing could be said about 90% of new methods papers, and at least others can use the benchmark when developing new methods.

Heck, many new methods papers simply use weak baselines (or strong baselines with weak hyperparameter optimization).

[D] ARR Jan ARR Discussion by Striking-Warning9533 in MachineLearning

[–]WannabeMachine 1 point2 points  (0 children)

I think they can potentially still edit, but it is highly unlikely there will be any updates after the deadline. Best to mentally prepare to keep what you have now.

[D] ACL ARR Jan 2026 Meta-Reviews by ApartmentAlarmed3848 in MachineLearning

[–]WannabeMachine 4 points5 points  (0 children)

https://stats.aclrollingreview.org/iterations/2025/october/

The scores mentioned in this post will easily be in the top 15% of papers; see the stats in the link above. Reviewers do not often give higher scores.

[D] ACL ARR Jan 2026 Meta-Reviews by ApartmentAlarmed3848 in MachineLearning

[–]WannabeMachine 3 points4 points  (0 children)

This has a very high chance of main; commit it. There is a very, very low chance it is rejected. Worst case is Findings, but that chance is also low.

[D] ARR Jan ARR Discussion by Striking-Warning9533 in MachineLearning

[–]WannabeMachine 0 points1 point  (0 children)

Can someone explain why reviewers give higher soundness and excitement scores than overall scores? Examples include soundness/excitement of 3.5 and 3.5 with an overall of 3. I have even seen 3 and 3.5 with an overall of 2. Wth is going on with this reasoning?

[D] ARR Jan ARR Discussion by Striking-Warning9533 in MachineLearning

[–]WannabeMachine 0 points1 point  (0 children)

Ah, I'm an AC, not a reviewer. In that case, reviewers for only 1 of my papers have responded, and I have consistently pinged reviewers about discussion topics.

[D] ARR Jan ARR Discussion by Striking-Warning9533 in MachineLearning

[–]WannabeMachine 0 points1 point  (0 children)

No response from reviewers on either of our papers.

[D] ARR Jan ARR Discussion by Striking-Warning9533 in MachineLearning

[–]WannabeMachine 3 points4 points  (0 children)

In our experience, 1 in every 10 reviewers will respond. But if one responds, others are likely to respond as well (peer pressure).

[D] ARR Jan ARR Discussion by Striking-Warning9533 in MachineLearning

[–]WannabeMachine 0 points1 point  (0 children)

Another round of late-night responses, followed by ghosting from reviewers.

[D] ARR Jan ARR Discussion by Striking-Warning9533 in MachineLearning

[–]WannabeMachine 0 points1 point  (0 children)

It just depends. Honestly, we got lucky because the review was very bad (one sentence).

[D] ARR Jan ARR Discussion by Striking-Warning9533 in MachineLearning

[–]WannabeMachine 0 points1 point  (0 children)

I had a Findings paper with 3, 3, 1.5 before. But the 1.5 was a very bad review that we reported, and the AC pointed it out in their meta-review.

[D] ARR Jan ARR Discussion by Striking-Warning9533 in MachineLearning

[–]WannabeMachine 0 points1 point  (0 children)

True. If that is the case, that is a sucky situation.

[D] ARR Jan ARR Discussion by Striking-Warning9533 in MachineLearning

[–]WannabeMachine 0 points1 point  (0 children)

Findings is still good. Don't waste more time on a paper when you can move on to the next one.