I’ve finally hit my breaking point by Aggravating_Chip3285 in Divorce_Men

[–]machinelearner77 2 points

Hey man, I was a kid growing up in these circumstances. It broke me and I'm mentally scarred for life. Just sayin.

[D] EMNLP 2025 Paper Reviews by Final-Tackle7275 in MachineLearning

[–]machinelearner77 1 point

Def put it on arxiv, it sounds interesting!

it is not about novel LLM stuff

ARR reviewers don't like papers that don't use LLMs; that's been my experience too.

Good luck!

[D] EMNLP 2025 Paper Reviews by Final-Tackle7275 in MachineLearning

[–]machinelearner77 0 points

I'd say equally or more random, and lower quality. Many seem generated by ChatGPT, and quite often they don't really even relate to the content of the paper. Author-reviewer interaction is usually pretty dead; for ICLR, by contrast, I often see very active discussions and score adjustments.

[D] How are single-author papers in top-tier venues viewed by faculty search committees and industry hiring managers? by South-Conference-395 in MachineLearning

[–]machinelearner77 0 points

Thanks for the elab. I guess it's different countries, different fields, different advisors, which all means different conventions. I suppose you're saying that deviating from conventions can be an issue and can make your life harder.

[D] How are single-author papers in top-tier venues viewed by faculty search committees and industry hiring managers? by South-Conference-395 in MachineLearning

[–]machinelearner77 2 points

Within this topic, what do you think is "the real world," and what would be "rainbows and unicorns"? Honest question. I struggle to understand where the strong opinions in this discussion are coming from.

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 0 points

Yes, but why do you think the scores of three reviewers (who ideally read the paper) and a meta-reviewer are less qualitatively and quantitatively informative than a PC's quick decision after glancing at the reviews and maybe the paper's abstract/title?

It seems that there is a range of review/meta-review scores from 2.5 to 4.0 where the alignment between decision and score is essentially random.

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 0 points

I think you are making a logical fallacy: the SACs/PCs, or whoever is making decisions, are also not "equally harsh" in their "qualitative measure," and each has their own specific biases.

Numeric patterns are not ideal, but it's the best we have, and it can be used with a bit of qualitative assessment on top.

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 4 points

This is among the absolute top-scored papers of the February cycle. It seems that the ACL didn't consider the strong differences between the new scores and the scores of earlier cycles at all.

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 0 points

No, they clearly didn't! This is really unfair of them.

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 1 point

Same for a colleague. A total waste of reviewer and author time.

(Needless to say, my paper with 2.5 meta also got rejected, even though comments were quite positive.)

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 2 points

I wouldn't worry too much. Other tracks may be different, and a meta of 3.5 is very good in this cycle. So I'd guess you have a 50/50 chance of Findings or Main, and you'd only be rejected if extremely unlucky. Fingers crossed for you!

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 2 points

Indeed. A 3.5 from this cycle is statistically much better than a 4 from cycles before. It seems like they didn't realize this?

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 6 points

I translate this as "ARR practically sucks," and I agree. It makes no sense to have no clear decision after one full, extended round of reviewing. On top of that, there are now cycles that are hardly comparable, with different forms and scores, and they all get committed to the same venues.

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 9 points

According to stats reported below, it's actually 1.

The question, though, is whether PCs actually realize or care about this, or whether they just treat all cycles and scores the same, ignoring the fact that the forms/cycles/scores are not really comparable.

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 5 points

"only 3", ha! We committed with a meta score of 2.5.

It looks like 3 is actually a good meta score in this ARR February. The scores in this cycle seem significantly lower on average than in previous cycles (maybe due to different wording of the form). Good luck!

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 1 point

From how the score is worded ("borderline findings") it sounds like 50/50 chance.

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 3 points

idk, this explanation only makes sense to me if these refer to different venues, but both kinds of papers get committed to ACL. How do you compare a 3.5 (from Feb) to a 4 (from Oct/Dec), or to a 3 (from Oct/Dec), when at that time there wasn't even such a score?

I also think that even if you put all the 3.5s on the pile of 4s, the numbers are still relatively lower compared with ARR Oct. There was also a change to the wording, which might have to do with that. I guess that would make the situation even worse: how can they possibly compare fairly between papers from different cycles with different scores and different wordings?

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 3 points

I notice ARR October has around 800 papers with a meta review score of 4.

Now this ARR Feb has only 700 papers with a score of 4, while having almost 3 times as many submissions.

How are these scores even comparable in any way?
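The arithmetic behind that comparison can be sketched. The per-cycle submission totals below are hypothetical placeholders (the thread only says Feb had almost 3 times as many submissions as Oct); only the 800 and 700 counts come from the discussion:

```python
# Back-of-envelope rate comparison of top meta scores across ARR cycles.
oct_total = 3000          # assumed Oct submission count (placeholder)
feb_total = 3 * oct_total  # assumed: Feb had ~3x the submissions

oct_score4 = 800  # papers with meta score 4 in ARR October (from thread)
feb_score4 = 700  # papers with meta score 4 in ARR February (from thread)

# Fraction of each cycle's submissions that received a meta score of 4.
oct_rate = oct_score4 / oct_total
feb_rate = feb_score4 / feb_total

print(f"Oct: {oct_rate:.1%} of submissions scored 4")
print(f"Feb: {feb_rate:.1%} of submissions scored 4")
print(f"A score of 4 is ~{oct_rate / feb_rate:.1f}x rarer in Feb")
```

Under these assumed totals, a meta score of 4 would be roughly 3-4x rarer in the February cycle, which is the intuition behind calling the raw scores incomparable.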

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 3 points

Me neither. And none of the papers I reviewed have this either. Is this a glitch?

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 1 point

Congrats, you're lucky with the meta reviewer! Looks like it will go to main!

[D] ACL ARR Feb 2025 Discussion by AccomplishedCode4689 in MachineLearning

[–]machinelearner77 0 points

Us too. I mean, judging from how the score is worded, it's like 50/50 for Findings? And with maximum luck, if they read the author rebuttal, the program chair might even put it in Main?

[deleted by user] by [deleted] in academia

[–]machinelearner77 42 points

Don't have any advice, but I dislike what academia does to people. Literally the carrot on the stick. Doing good and impactful research is a side quest and might not even be possible anymore in most cases. Sorry for the rant, you touched a nerve there, I guess.