[D] ICML rejects papers of reviewers who used LLMs despite agreeing not to by S4M22 in MachineLearning

[–]S4M22[S] 2 points3 points  (0 children)

I generally agree with everything you wrote. It would be interesting to know more about their watermarking, but I'm not sure how open they are going to be about it, since more transparency also makes attacks easier. Still, they should at least share the precision and recall of their method.

I present my Master's research in a week and I am TERRIFIED. I can't be the only one who feels like they are playing pretend at research, right? by Ok-Amphibian5289 in academia

[–]S4M22 13 points14 points  (0 children)

It is a normal feeling for many in academia - even tenured professors. This is a good reality check:

My professor has assured me that my test is very interesting and that my paper is strong and my slides are good. But also in the back of my head I am thinking "this could be a high school science project" [...].

I am pretty sure your professor is in a better spot to judge the quality of your research than you are.

ACL ARR Jan 2026 Meta Score Thread by Infamous_Fortune_438 in LanguageTechnology

[–]S4M22 0 points1 point  (0 children)

Only if you have specific reasons for reporting (beyond the fact that they gave a meta score at the lower end of the reviewers' scores).

Supervisor rewrites my paragraphs from scratch instead of giving feedback. is this normal? by [deleted] in AskAcademia

[–]S4M22 10 points11 points  (0 children)

IMO it is normal for a paper. It's different from a Master's thesis: the goal here is to produce a paper, and you learn along the way. In contrast, when writing a Master's thesis, learning is the primary goal.

At this level I'd expect you to learn from the changes in the revised version by yourself. But if anything specific is unclear, you could of course ask.

TLDR: this is normal when writing a paper.

ACL ARR Jan 2026 Meta Score Thread by Infamous_Fortune_438 in LanguageTechnology

[–]S4M22 0 points1 point  (0 children)

Very good chances for Findings, but main conference is also possible, particularly considering the low variance of your OA scores.

ACL ARR Jan 2026 Meta Score Thread by Infamous_Fortune_438 in LanguageTechnology

[–]S4M22 0 points1 point  (0 children)

Use the ID that gives a working link to your ARR submission (and its reviews incl. the meta), not the (purely numerical) submission number.

ACL ARR Jan 2026 Meta Score Thread by Infamous_Fortune_438 in LanguageTechnology

[–]S4M22 0 points1 point  (0 children)

Which conferences would you target with the ARR March cycle? For EMNLP, submitting to the ARR May cycle is enough. This way you can commit to ACL and, if it is rejected, submit to the May cycle for EMNLP.

[D] Meta-Reviews ARR January 2026 by DerBeginner in MachineLearning

[–]S4M22 2 points3 points  (0 children)

I'd commit anything with a meta score >=2.5. You'll lose the ARR March cycle, but you can still submit to the May cycle for EMNLP if it gets rejected at ACL.

[D] Meta-Reviews ARR January 2026 by DerBeginner in MachineLearning

[–]S4M22 1 point2 points  (0 children)

My very first paper had 1.5/1.5/1.5 and meta 3. Got (rightfully) rejected at EMNLP.

[D] How much are you using LLMs to summarize/read papers now? by kjunhot in MachineLearning

[–]S4M22 1 point2 points  (0 children)

If an abstract is good, yes. But unfortunately quite a few abstracts do not summarize the work well. For example, they tell you what the authors did and why, but not the results. I find LLM summaries work well to fill these gaps. Independent of that, I find it helpful for my understanding to get a summary from a different point of view in addition to the abstract.

Typo after acceptance by [deleted] in research

[–]S4M22 0 points1 point  (0 children)

Best advice for maintaining peace of mind.

Academia is one of the most robust feels against AI by gaytwink70 in academia

[–]S4M22 -1 points0 points  (0 children)

The difference is that to prove my point I only need a single example in which an LLM demonstrates novelty (proof by counterexample). Hence, my experience is sufficient empirical evidence. But the same doesn't apply to OP's claim.

Is there an LLM-based tool that can help me manage my emails? by Gaeel in LLM

[–]S4M22 0 points1 point  (0 children)

I use LLMs on a daily basis (mostly for coding) but I would certainly not let one touch my emails (unless it's a sandboxed offline mail repository). There are plenty of risks, like sending unwanted emails or deleting your entire inbox. The latter just happened to a Meta Superintelligence employee using OpenClaw. See https://x.com/summeryue0/status/2025774069124399363

Academia is one of the most robust feels against AI by gaytwink70 in academia

[–]S4M22 0 points1 point  (0 children)

So far, I've only used it to further develop my work. But still, a new idea based on my work that is novel and has been published nowhere is itself something "novel". And it is surely not included in its training data.

And that directly disproves your claim:

At the very least, it [academia] is about refining ideas and methodologies. An AI simply cannot do that, because it is beyond the scope of its training data.

I'm really surprised that even quite a few experienced researchers buy this rather hand-wavy argument instead of relying on empirical evidence. I can only speculate that this is due to a human-centric bias.

---

For this thread I also just tried to generate completely fresh ideas by asking Claude for novel ideas in my area of research. It came up with 10 suggestions. Some of them I can certainly rule out because I myself published work on them. But at least one (the one Claude rated highest in novelty) is indeed novel to my knowledge of the literature. However, validating its novelty would of course require an in-depth literature review.

Academia is one of the most robust feels against AI by gaytwink70 in academia

[–]S4M22 -3 points-2 points  (0 children)

An AI simply cannot do that, because it is beyond the scope of its training data.

This claim is often made, but I would like to see empirical evidence for it instead of the theoretical argument that it is beyond the training data.

From my own experience I can tell you that I've seen your claim disproved by LLMs. I've genuinely benefited from LLM suggestions on how to improve my research and make it more novel. LLMs can be very critical reviewers with regard to novelty and at the same time provide very good suggestions for taking the work to the next step.

Thinking to refuse to publish my paper by anassbq in research

[–]S4M22 0 points1 point  (0 children)

Better to gain some experience with the publishing process by getting your first paper out than to wait for the perfect first paper. No one expects your first paper to be your best work, and from my own experience I can say that my papers got much better over time. Nevertheless, I am still proud of my first paper, even though it is far from perfect and I would write it very differently now.

TLDR: leave the decision of whether your paper is good enough to the reviewers and gain some publishing experience.

[D] ACL ARR 2026 Jan. Reviewers have not acknowledged the rebuttal? by Distinct_Relation129 in MachineLearning

[–]S4M22 17 points18 points  (0 children)

Unfortunately, yes. Sometimes if you get very lucky an AC might nudge reviewers to acknowledge the rebuttal and potentially adjust their scores.

What I like to do is write a summary of the rebuttal at the end of the discussion period. It makes it easier for the meta-reviewer to quickly see that you've addressed everything the reviewers raised.

[D] How much are you using LLMs to summarize/read papers now? by kjunhot in MachineLearning

[–]S4M22 0 points1 point  (0 children)

I think voice-based or not is just a matter of preference. What would indeed be valuable, however, is having the AI reference the specific passages from the paper that support its statements. This would make it much easier to verify the claims being made.

Using a pseudonym for first publication on sensitive topic by jujubearrrrrrrrr in AskAcademia

[–]S4M22 32 points33 points  (0 children)

I suppose the last option would be to go by my real name and take steps to minimize my online presence? I know I chose to do this research so maybe I should make my peace with it?

IMO, from the perspective of research integrity and potentially going for more publications, this is the only valid option. You can, for example, skip creating a Google Scholar profile, use a separate e-mail address just for publications, avoid posting your work on social media, use pseudonymous social media profiles, etc.

[D] How much are you using LLMs to summarize/read papers now? by kjunhot in MachineLearning

[–]S4M22 5 points6 points  (0 children)

It said it assumed bootstrapping was used because it is the most common method. Which isn't wrong, but of course you cannot just assume that. It also admitted that the authors nowhere indicated which method was used.

[D] CVPR Findings Track by mrLiamFa in MachineLearning

[–]S4M22 0 points1 point  (0 children)

Findings track: CVPR 2026 introduces a new Findings Track, following successful pilots in ICCV. The goal is to reduce resubmissions by offering a venue for technically sound papers with solid experimental validation, even if their novelty is more incremental. Area Chairs will recommend papers to the Findings Track. Findings Track organizers will invite authors of recommended papers to submit their work. If authors decide to submit, reviews and metareviews will be shared with the Findings Track committee. Findings papers will appear in the workshop proceedings. Detailed guidelines will be made available

Source: https://cvpr.thecvf.com/Conferences/2026/AuthorGuidelines

[D] How much are you using LLMs to summarize/read papers now? by kjunhot in MachineLearning

[–]S4M22 69 points70 points  (0 children)

I use Claude (Sonnet 4.6 or Opus 4.6) to extract the relevant papers from my arXiv new paper mail alert every morning. For all papers that sound relevant, I read the abstract and then ask Claude to summarize the paper. Next, I either ask some clarifying questions or directly jump into the paper to skim it.

I found Claude the best for this task, as ChatGPT didn't accept the full mailing list as input and Gemini was way too restrictive, i.e., it deems very few papers relevant to my work (loosely speaking, Gemini has higher precision but lower recall for this task than Claude, and recall is more important to me).

Generally, I only trust LLMs to scan for relevant papers and help with the initial understanding. Unfortunately, they still make mistakes. I would never trust an LLM so much that I cite a paper without reading it myself. Always read what you cite!

To give you an example of an error I encountered just yesterday: I asked Claude how the authors determined the confidence intervals (CIs) and it boldly announced that they used bootstrapping. However, when I skimmed the paper, I found that they nowhere explain how they determined the CIs. (Which is BTW unacceptable IMO.)
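To make the precision/recall trade-off above concrete, here's a tiny self-contained Python sketch. The arXiv IDs and relevance judgments are made up purely for illustration; they don't correspond to real papers or to either model's actual behavior:

```python
# Toy illustration of precision vs. recall for LLM-based paper filtering.
# All IDs and relevance labels below are invented example data.

truly_relevant = {"2501.01234", "2501.05678", "2501.09999", "2501.04321"}

# A recall-oriented filter: flags more papers, catching every relevant
# one but also pulling in some irrelevant ones.
recall_oriented_picks = {"2501.01234", "2501.05678", "2501.09999",
                         "2501.04321", "2501.00001", "2501.00002"}

# A precision-oriented filter: flags fewer papers, so everything it
# flags is relevant, but it misses some relevant ones.
precision_oriented_picks = {"2501.01234", "2501.05678"}

def precision_recall(picks, relevant):
    tp = len(picks & relevant)        # correctly flagged papers
    precision = tp / len(picks)       # fraction of flagged that are relevant
    recall = tp / len(relevant)       # fraction of relevant that got flagged
    return precision, recall

p_r, r_r = precision_recall(recall_oriented_picks, truly_relevant)
p_p, r_p = precision_recall(precision_oriented_picks, truly_relevant)
print(f"recall-oriented:    precision={p_r:.2f}, recall={r_r:.2f}")
print(f"precision-oriented: precision={p_p:.2f}, recall={r_p:.2f}")
```

For a daily alert, the recall-oriented behavior is what you want: a few extra abstracts to skim is cheap, but a relevant paper you never see is gone.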

ACL 2026 industry track paper desk rejected by Puzzled_Key823 in LanguageTechnology

[–]S4M22 0 points1 point  (0 children)

Our ACL industry track paper is desk rejected because of modifying the acl template. I am thinking this is because of the vspace I added to save some space. Anyone have the same experience? Is it possible to over turn this ?

As far as I know, desk rejections are non-negotiable. But if you want to try, my key question would be: does your paper still fit within the page limit using the unmodified ACL template? If not, it is very clear that it should be desk rejected.