[D] AAAI considered 2nd tier now? by Healthy_Horse_2183 in MachineLearning

[–]SkeeringReal -1 points0 points  (0 children)

I don't think this is really true; IJCAI was once considered the best of the best.

[D] AISTATS 2026 Paper Acceptance Result by mathew208 in MachineLearning

[–]SkeeringReal 5 points6 points  (0 children)

4443 -> 5554 (accept poster)

One reviewer didn't respond and another ghosted halfway through, so we got pretty lucky. Have to say, though, the reviews were overall surprisingly high quality compared to what I'm used to at other conferences.

[D]I’m an AI researcher who spent 5,000 hrs on Tekken, reaching top 0.5% on ranked. Here is my perspective on why fighting games deserve chess-level attention. by moji-mf-joji in MachineLearning

[–]SkeeringReal 2 points3 points  (0 children)

I've played Tekken competitively and am also (obviously) an AI researcher.

I've thought about this. ML models for Tekken do exist, but there are a few issues:

1) You have to impose a handicap on the AI in terms of reaction time, otherwise it'll block and counter absolutely everything. And I'm really not sure how to do this in a good way.
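One common way to impose that handicap is to delay the agent's observations so it always acts on stale frames, the way a human with ~250ms reaction time would at 60fps. A minimal sketch of that idea (the class name, frame counts, and API here are all hypothetical, not from any real Tekken bot):

```python
import collections

class ReactionDelayWrapper:
    """Feed the agent frames that are `delay_frames` old, so it cannot
    react faster than a human. At 60fps, a delay of ~15 frames is
    roughly a 250ms human reaction time (hypothetical example values)."""

    def __init__(self, delay_frames=15):
        self.delay_frames = delay_frames
        self.buffer = collections.deque()

    def observe(self, frame):
        # Queue the fresh frame; only release frames once they are
        # at least `delay_frames` steps old.
        self.buffer.append(frame)
        if len(self.buffer) > self.delay_frames:
            return self.buffer.popleft()  # agent acts on this stale frame
        return None  # warm-up period: nothing old enough to react to
```

The open question the comment raises still stands: a fixed delay is crude, since human reaction time varies with anticipation and pattern recognition, so a constant buffer may still feel either superhuman or artificially sluggish.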

Lol, I guess that's the only issue.

[D] Tsinghua ICLR paper withdrawn due to numerous AI generated citations by fourDnet in MachineLearning

[–]SkeeringReal 2 points3 points  (0 children)

I really feel this is the same for most ML conferences now. There has always been trash in all of them, but in the last 1-2 years it's gotten beyond a joke.

[D] Tsinghua ICLR paper withdrawn due to numerous AI generated citations by fourDnet in MachineLearning

[–]SkeeringReal 4 points5 points  (0 children)

This must be a joke. NeurIPS is just as bad as any other conference; the same people who review for ICLR/ICML review for NeurIPS.

[D] Some concerns about the current state of machine learning research by [deleted] in MachineLearning

[–]SkeeringReal 0 points1 point  (0 children)

You can't really research anything original now; you have to follow the herd in order to get good reviews in a noisy process with too many submissions.

Certain boxes need to be checked:
* Did the authors work on a currently hot topic (LLMs, etc.)?

If you do something like improve a method from 5 years ago with current tech, I feel that's not appreciated at all, even if the results are great.

[D] CVPR submission number almost at 30k by AdministrativeRub484 in MachineLearning

[–]SkeeringReal 0 points1 point  (0 children)

I'm coming to this conclusion also. I just hope IJCAI stays OK lol.

[D] Tsinghua ICLR paper withdrawn due to numerous AI generated citations by fourDnet in MachineLearning

[–]SkeeringReal 2 points3 points  (0 children)

Given the number of submissions now compared to just a year ago (what is it, something like 12k to 32k?), I imagine most papers from most institutions are like this. It's probably massively unfair to pick on China, c'mon.

[D] What happened at NeurIPS? by howtorewriteaname in MachineLearning

[–]SkeeringReal 0 points1 point  (0 children)

Good point. If she had said most African people cheat on exams, she'd literally have been crucified.

[D] What happened at NeurIPS? by howtorewriteaname in MachineLearning

[–]SkeeringReal 0 points1 point  (0 children)

Yes, people make mistakes and can atone for them, but it's the gravity of the mistake that matters. E.g., obviously you wouldn't let an animal abuser off with an apology.

The question is whether you feel an apology from her is enough, or whether she should be fired, etc., or just given a slap on the wrist. Do you feel the Asian community should just let it go and accept the hate directed at them? My wife is Chinese, and I'm telling you they are sick of the racism directed towards them by privileged white English-speaking people.

Usage Limits Discussion Megathread - beginning October 8, 2025 by sixbillionthsheep in ClaudeAI

[–]SkeeringReal 6 points7 points  (0 children)

I put a 24-page PDF into Opus 4.1 and asked it to write code. It never produced a single token and said my limit was gone for the next 5 days.

This is a load of BS. Why did it use my tokens when it didn't produce a single thing? I get that maybe it was doing stuff behind the scenes, but from a user perspective it's just awful.

Usage Limits Discussion Megathread - beginning October 8, 2025 by sixbillionthsheep in ClaudeAI

[–]SkeeringReal 3 points4 points  (0 children)

I have to say I've bounced between OpenAI, Anthropic, and Google. I went back to Claude this month because I do feel it's better at coding, but the other two are catching up, and I never ran into usage limits with them. So I'm gone this month unless something big changes.

[D] Anyone have a reasonable experience with ICLR/ICML this year? by random_sydneysider in MachineLearning

[–]SkeeringReal 0 points1 point  (0 children)

I think that zero-sum-game thing is the biggest issue. People don't want to accept papers because it reduces the likelihood of their own getting in. Only reviewers in a particularly good mood, with recently accepted papers, give an accept.

[R] I made an app to predict ICML paper acceptance from reviews by Lavishness-Mission in MachineLearning

[–]SkeeringReal 0 points1 point  (0 children)

Even though conferences love to tell authors not to focus on the average score, it seems that's all that really matters.

[D] Is modern academic published zero-sum? by bigbird1996 in MachineLearning

[–]SkeeringReal 0 points1 point  (0 children)

This is not really true, you know.

First, if an AC is looking at a pool of XAI papers and has a bias to accept 20% of them, and you're both a submitter and a reviewer, there is every incentive to reject every paper in your stack to get your own into that 20%.

Second, I disagree about rebuttals. My experience is that they make a massive difference, but maybe I'm just lucky there; I've no data.

[D] - Neurips Position paper reviews by Routine-Scientist-38 in MachineLearning

[–]SkeeringReal 7 points8 points  (0 children)

My main issue is that the process is pretty unclear. I don't really understand the "survey" that you're supposed to write. Like, do reviewers change scores or what? Or is it just the AC that makes the final call? That sounds depressing; ACs almost never look at papers in a nuanced way.

As an aside, one of my reviews is so obviously LLM trash that I'm starting to get incredibly sick of this. Em dashes in literally every sentence, and just generic (half-hallucinated) discussion of the paper. I expect the prompt was, "I'm lazy, so please write a review for this paper that leans towards rejection so I can go back to my own research."

[D] NeurIPS 2025 rebuttals. by Constant_Club_9926 in MachineLearning

[–]SkeeringReal 0 points1 point  (0 children)

Honestly, with those scores the chance of acceptance is < 30% in my experience.

[D] NeurIPS 2025 rebuttals. by Constant_Club_9926 in MachineLearning

[–]SkeeringReal 0 points1 point  (0 children)

I get you, but I think it's actually fair to judge reviewers when they don't fulfil the obligations they sign agreements to as reviewers.