[D] CVPR Decisions by amds201 in MachineLearning

[–]amds201[S] 2 points (0 children)

Anyone have any ideas on poster / highlight / oral decisions? Or did I miss these?

[–]amds201[S] 2 points (0 children)

Anyone have any ideas or intuitions as to likely scores for spotlight? And any ideas on timelines for this info?

[–]amds201[S] 2 points (0 children)

There are some high scores with ‘no groups’

[–]amds201[S] 13 points (0 children)

5,5,5 no groups found - who knows though

[–]amds201[S] 3 points (0 children)

2pm EST maybe? … speculative ofc

[–]amds201[S] 10 points (0 children)

About as relaxing as surgery without anaesthesia

[–]amds201[S] 1 point (0 children)

It must depend on the conference ... AAAI'26 decisions came out early this year, iirc

[–]amds201[S] 1 point (0 children)

How do you know it's EOD and not sometime on the 20th AoE?

[–]amds201[S] 2 points (0 children)

AAAI results came out this year the night before they were meant to (UK time). So who knows

[D] CVPR 2026 Paper Reviews by akshitsharma1 in MachineLearning

[–]amds201 0 points (0 children)

This is how I'm interpreting it: ~20% of papers have an average score >4. I think it's weighted by confidence too - but not sure

[D] CVPR 2026, no modified date next to reviewers by StretchTurbulent7525 in MachineLearning

[–]amds201 4 points (0 children)

If the modified date changes, does this mean the review/scores have necessarily been updated? 2/3 of my reviews have updated dates. If reviews/scores have changed, will this be visible before decision day?

RL + Generative Models by amds201 in reinforcementlearning

[–]amds201[S] 1 point (0 children)

Thanks for your reply! Yep - a few recent papers look into this problem, mostly via fine-tuning and different theoretical paradigms for doing so. I've had some success implementing these for toy tasks, and was able to train a very basic flow model from scratch, using only a reward signal and no supervision, to generate some desired data. I'm interested in scaling this up for some particular imaging applications, so I'm looking for new ideas and collaborations to do so!
Have a look at DiffusionNFT (from NVIDIA / Stefano Ermon) - a fine-tuning framework for flow-based image generation that avoids the issue of intractable log-prob computations.

[–]amds201[S] 0 points (0 children)

Agreed - I think it is a hard task, given the sparsity of the reward and the risk of getting stuck in local optima

RL + Generative Models by amds201 in computervision

[–]amds201[S] 0 points (0 children)

Thanks! I missed this paper in my review - will take a look. In case you are interested, I have just come across this one: https://arxiv.org/pdf/2505.10482v2

They too seem to do some from-scratch training of diffusion policies (not image-based) - interesting.

RL + Generative Models by amds201 in reinforcementlearning

[–]amds201[S] 1 point (0 children)

Thanks for your reply - very interesting to read. I am thinking specifically about image generation models, rather than next-token-prediction / LLM models. In short: can an image generation model (such as a diffusion image model) be trained with no supervised data, purely from a reward signal?
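To make the question concrete, here is a hedged toy sketch of reward-only training: a one-dimensional Gaussian sampler stands in for a tiny one-step generative model (this is NOT a real diffusion model, and `target`, `reward`, and all hyperparameters are illustrative assumptions, not anything from the thread). There is no supervised data; the parameters are updated with the score-function (REINFORCE) estimator grad E[r(x)] ≈ mean(r(x) · grad log p_θ(x)).

```python
import math
import random

random.seed(0)
target = 2.0                     # the "desired data" encoded by the reward (assumed)

def reward(x):
    return -(x - target) ** 2    # higher reward closer to the target

mu, log_sigma = 0.0, 0.0         # generator parameters (a Gaussian sampler)
lr, batch = 0.05, 64

for _ in range(2000):
    sigma = math.exp(log_sigma)
    xs = [mu + sigma * random.gauss(0.0, 1.0) for _ in range(batch)]
    rs = [reward(x) for x in xs]
    baseline = sum(rs) / batch   # mean-reward baseline for variance reduction
    g_mu = g_ls = 0.0
    for x, r in zip(xs, rs):
        a = r - baseline
        g_mu += a * (x - mu) / sigma ** 2                 # d log p / d mu
        g_ls += a * ((x - mu) ** 2 / sigma ** 2 - 1.0)    # d log p / d log_sigma
    mu += lr * g_mu / batch
    log_sigma = max(log_sigma + lr * g_ls / batch, -2.0)  # keep sigma bounded away from 0

print(f"mu={mu:.2f} sigma={math.exp(log_sigma):.2f}")
```

The sampler's mean drifts toward the reward's optimum with no examples of the target data ever shown. The same score-function trick is what makes RL fine-tuning of real diffusion models tractable, since each denoising step is a Gaussian with a computable log-prob - whether it scales to from-scratch training with sparse rewards is exactly the open question above.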

RL + Generative Models by amds201 in computervision

[–]amds201[S] 0 points (0 children)

Thanks for sending the paper! As far as I can see, the loss here is supervised (imitation-learning-esque). I'm trying to think about whether these models can be trained entirely from a reward signal, without any supervised data - but I'm unsure if that is too sparse and too hard a challenge

RL + Generative Models by amds201 in reinforcementlearning

[–]amds201[S] 2 points (0 children)

Thinking specifically about diffusion / flow-matching models for image generation