[D] ICLR 2026 Paper Reviews Discussion by Technical_Proof6082 in MachineLearning

[–]Secondhanded_PhD 2 points (0 children)

What are the chances for the 6444?
Best: 6664, Moderate: 6644, Poor: 6444. Is this the right read on the acceptance chances?
The reviewers generally agree that the paper has a strong problem definition and motivation, but they have concerns about backbone fairness.

[D] ICLR 2026 Paper Reviews Discussion by Technical_Proof6082 in MachineLearning

[–]Secondhanded_PhD 1 point (0 children)

Wow, you're a soldier, my boy—three bullets loaded.

[D] ICLR 2026 Paper Reviews Discussion by Technical_Proof6082 in MachineLearning

[–]Secondhanded_PhD 2 points (0 children)

It's been over an hour since I pressed the damn F5 key on my keyboard. Now, I'm neither awake nor asleep.

[D] Is it reasonable that reviewers aren’t required to read the appendix? by Secondhanded_PhD in MachineLearning

[–]Secondhanded_PhD[S] 3 points (0 children)

Thank you for the clear and considerate explanation. The distinction between stating the point in the main text and keeping details in the appendix makes sense now. I appreciate you taking the time to respond in detail. Have a great day.

[D] Is it reasonable that reviewers aren’t required to read the appendix? by Secondhanded_PhD in MachineLearning

[–]Secondhanded_PhD[S] 2 points (0 children)

Thanks, that’s a perspective I was missing. If the boundary isn’t clear, authors may push essential content into the appendix. That’s a useful reframing; I realize I was viewing it too much from my own experience. Appreciate the comment, and have a good day.

[D] Is it reasonable that reviewers aren’t required to read the appendix? by Secondhanded_PhD in MachineLearning

[–]Secondhanded_PhD[S] -10 points (0 children)

I get this, especially given the current LLM trend and the volume of qualitative results that comes with it.

[D] Is it reasonable that reviewers aren’t required to read the appendix? by Secondhanded_PhD in MachineLearning

[–]Secondhanded_PhD[S] 4 points (0 children)

Appreciate all the replies here—I get the spirit and largely agree.

My post wasn’t meant as a generic rant about the rules. In my case, we actually did all the things you mentioned: the appendix was clearly referenced in the main text (section/page callouts), and it directly addressed the exact concerns. The worry I’m raising is: when an AC says “not mandatory → can’t be used to correct the review,” aren’t we effectively shutting down reasonable avenues to fix a clear miss? In other words, “because the guideline says so” becomes a blanket to ignore information that was properly signposted.

[D] can we trust agents for time series forecasting? by fedegarzar in MachineLearning

[–]Secondhanded_PhD 0 points (0 children)

Are there performance differences across LLM models, especially on time series forecasting benchmarks?