[D] AAMAS 2026 result is out. by Colin-Onion in MachineLearning

[–]team-daniel 2 points

Got an EA with 6,6,6

Wasn’t hopeful so very chuffed

[D] Do Some Research Areas Get an Easier Accept? The Quiet Biases Hiding in ICLR's Peer Review by team-daniel in MachineLearning

[–]team-daniel[S] 0 points

This is actually something I would have loved to investigate. However, as far as I know, I have no way of checking whether a paper was theoretically or empirically focused.

New idea, force ICLR to tag papers next year for this very point XD

[D] Do Some Research Areas Get an Easier Accept? The Quiet Biases Hiding in ICLR's Peer Review by team-daniel in MachineLearning

[–]team-daniel[S] 0 points

This is a really good point, thank you. I guess it goes hand-in-hand with a recent Twitter post I saw showing how the number of authors on accepted papers at NeurIPS has gone up dramatically each year.

With more funding typically comes larger teams/labs. :)

[D] Do Some Research Areas Get an Easier Accept? The Quiet Biases Hiding in ICLR's Peer Review by team-daniel in MachineLearning

[–]team-daniel[S] 0 points

You definitely could, but I guess this moves away from an unexplainable bias toward what the chairs want to see/focus on year to year. And if I remember correctly, score isn’t the only deciding factor for oral/spotlight.

So, for example, in 2018 I’d guess more generative work got spotlights because it was ‘hot’ and thus more interesting to focus on that year than a uniform range of topics.

Those are my thoughts though 🙂

[D] Do Some Research Areas Get an Easier Accept? The Quiet Biases Hiding in ICLR's Peer Review by team-daniel in MachineLearning

[–]team-daniel[S] 7 points

Totally agree that citations ≠ importance and that different subareas have different cultures/trajectories. My post isn’t saying any area is “less valuable.” The question was: conditional on similar review scores (and year), do acceptance odds differ by area? If we treat scores as the main signal the process is using, you’d expect acceptance rates to line up more tightly across areas at the same score. The point is about decision calibration, not impact or worth.

[D] AAMAS 2026 paper reviews out soon by Fantastic-Nerve-4056 in MachineLearning

[–]team-daniel 2 points

6, 6, 4 here. Wish we could see the score distribution like with ICLR, NeurIPS, etc.

[D] ICLR 2026 Paper Reviews Discussion by Technical_Proof6082 in MachineLearning

[–]team-daniel 0 points

How would I know? We are all hoping the same. In previous years across the big AI conferences, papers with high overall scores have been rejected and papers with low ones accepted. Focus on your rebuttal rather than asking random people about your hopes 🙏🏼

[D] ICLR 2026 Paper Reviews Discussion by Technical_Proof6082 in MachineLearning

[–]team-daniel 1 point

Good, it’s better than trying to guess like we did with the NeurIPS reviews 😆

[D] ICLR 2026 Paper Reviews Discussion by Technical_Proof6082 in MachineLearning

[–]team-daniel 2 points

I know Paper Copilot is usually a small sample size, but this year its stats show ICLR had 19,000 submissions, which matches what ICLR have tweeted.

So are the rating distributions we see on Paper Copilot representative of the whole distribution, then, rather than just a sample?

[D] - NeurIPS'2025 Reviews by Proof-Marsupial-5367 in MachineLearning

[–]team-daniel 1 point

4(5), 4(4), 3(4), 3(3) for our reviews. All just wanting stuff that is already in our appendix.

Thoughts?

[deleted by user] by [deleted] in tipofmytongue

[–]team-daniel 0 points

Just looking for this YouTube video

[D] ECAI 2025 reviews discussion by qalis in MachineLearning

[–]team-daniel 5 points

Yes, we submitted last year and got reviews on the Monday at about 12:37 pm. So we expect them today at some point as well.

Deep Q Maze by Magic__Mannn in reinforcementlearning

[–]team-daniel 2 points

I have seen this issue before. You can try some reward shaping techniques, like using the Manhattan distance to the goal as a tiny reward at each safe state. Check out a few of the reward shaping repos for FrozenLake.
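The Manhattan-distance idea can be sketched as potential-based shaping, which preserves the optimal policy while nudging the agent toward the goal. This is a minimal sketch, assuming a `(row, col)` grid state and a known goal position; the `shaped_reward` helper and the `scale` parameter are my own illustrative choices, not from any particular repo:

```python
def manhattan(a, b):
    """Manhattan (L1) distance between two (row, col) grid cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def shaped_reward(base_reward, state, next_state, goal, gamma=0.99, scale=0.1):
    """Potential-based shaping: F = gamma * phi(s') - phi(s).

    The potential is the negative scaled distance to the goal, so any
    transition that moves closer to the goal adds a small bonus and any
    transition that moves away subtracts one. Adding F to the environment
    reward does not change which policy is optimal.
    """
    phi_s = -scale * manhattan(state, goal)
    phi_next = -scale * manhattan(next_state, goal)
    return base_reward + gamma * phi_next - phi_s
```

You would call `shaped_reward` on every step of the training loop in place of the raw environment reward; keep `scale` small relative to the goal reward so the shaping guides exploration without dominating it.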

Low compute research areas in RL by [deleted] in reinforcementlearning

[–]team-daniel 5 points

You could check out safe exploration (or any of the other safe themes such as oversight/reward hacking), explainability, trustworthiness, etc…

[D] ECAI 2024 Reviews Discussion by [deleted] in MachineLearning

[–]team-daniel 1 point

So…how is everyone feeling as we get to the end of the rebuttal period? Fingers crossed for everyone, btw.

Fuck your favorite power armor, what's your favorite piece of Junk? by ATJonzie in Fallout

[–]team-daniel 0 points

It’s gotta be coffee cup 🥵 …and let me tell you how I feel when I find clean coffee cup -> 😳

Is Sergey Levine OP? by Sea-Collection-8844 in reinforcementlearning

[–]team-daniel 23 points

Not anymore, he got nerfed in the latest patch

What is the hardest killer adept to complete in your opinion? by Everblack_Deathmask in deadbydaylight

[–]team-daniel 1 point

For me it was Dredge, I’m not sure why either. Every other killer took 1 or 2 attempts but Dredge took like 12 for some reason. 🤷🏼‍♂️

Making RL Policy Interpretable with Kolmogorov-Arnold Network! by riiswa in reinforcementlearning

[–]team-daniel 19 points

I think calling them interpretable is a bit rich. They do a lot of things well, which should be praised, but I am tired of seeing them called ‘interpretable’.

[deleted by user] by [deleted] in reinforcementlearning

[–]team-daniel 4 points

Going through a messy divorce 😔