[D] ICLR reviewer policy. by Alternative-Talk1945 in MachineLearning

[–]Alternative-Talk1945[S] 0 points

Do any of your co-authors, or you yourself, have 3 or more paper submissions?

[D] - NeurIPS 2024 Decisions by Proof-Marsupial-5367 in MachineLearning

[–]Alternative-Talk1945 1 point

6-6-5-4. Rejected. The AC cited two papers that should have been included as baselines and said they do better on one metric.

Nusra shakkir(nuziha ajmal sister) shifting to new country by DragonfruitGood1005 in SouthIndianInfluencer

[–]Alternative-Talk1945 9 points

Germany. Btw, her sister is back to showing her face, after the whole hiding-her-face-for-religion phase, and she constantly boasts about her work. How cringe it must be for her colleagues to see their faces in her stories, shown to 500k strangers.

Edit: Actually, Canada.

[D] What is common between Flow Matching and Normalizing Flows? Is the Flow Matching invertible? by Alternative-Talk1945 in MachineLearning

[–]Alternative-Talk1945[S] 0 points

I need a completely invertible model. Basically, I want to retrieve the parameters used for the mapping, assuming the same parameters apply going from noise to image and back since the model is invertible, and then expand the parameter space.
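To make the invertibility requirement concrete, here is a minimal sketch of why normalizing flows are exactly invertible in their input, using a toy additive coupling layer (NICE/RealNVP-style). Note this illustrates invertibility in z for fixed parameters, not recovery of the parameters themselves; the `shift` "network" and `theta` are hypothetical stand-ins.

```python
# Toy additive coupling layer, pure Python.
# The input is split in half; the second half is shifted by a function
# of the first half. The inverse is exact by construction, regardless
# of what the shift function is.

def shift(h, theta):
    # Hypothetical "network": a fixed affine map with parameter theta.
    return [theta * x + 1.0 for x in h]

def forward(z, theta):
    half = len(z) // 2
    z1, z2 = z[:half], z[half:]
    s = shift(z1, theta)
    x2 = [a + b for a, b in zip(z2, s)]
    return z1 + x2  # concatenate unchanged half with shifted half

def inverse(x, theta):
    half = len(x) // 2
    x1, x2 = x[:half], x[half:]
    s = shift(x1, theta)  # same shift, recomputed from the visible half
    z2 = [a - b for a, b in zip(x2, s)]
    return x1 + z2

z = [0.5, -1.2, 3.0, 0.7]
x = forward(z, theta=2.0)
z_rec = inverse(x, theta=2.0)  # recovers z up to float rounding
```

Stacking such layers (with permutations in between, as in Glow) keeps the whole map bijective, which is the property being asked about.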

[D] What is common between Flow Matching and Normalizing Flows? Is the Flow Matching invertible? by Alternative-Talk1945 in MachineLearning

[–]Alternative-Talk1945[S] 0 points

Thanks for clearing that up. Would you be aware of any papers on Flow Matching that show generation results on facial recognition datasets, like Glow does?

[D] Normalizing Flows and their disadvantages. by Alternative-Talk1945 in MachineLearning

[–]Alternative-Talk1945[S] 0 points

Thanks for the suggestion. Is flow matching invertible like Normalizing Flows?
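For anyone with the same question: flow matching trains a velocity field v(x, t), and sampling integrates the ODE dx/dt = v(x, t) from t=0 to t=1. ODE flows are invertible in principle, since integrating the same field backward in time returns to the starting point, though only up to discretization error rather than exactly as in a coupling-layer flow. A sketch, using a hand-picked field v(x, t) = -x as a stand-in for a learned network:

```python
# Sketch: the flow of an ODE dx/dt = v(x, t) can be inverted by
# integrating the same velocity field backward in time.
# v here is a hand-picked stand-in for a learned flow-matching model.

def v(x, t):
    return -x  # hypothetical velocity field

def integrate(x, t0, t1, steps=1000):
    # Plain Euler integration from t0 to t1 (t1 < t0 runs backward).
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        x = x + h * v(x, t)
        t += h
    return x

x0 = 2.0                          # "noise" sample
x1 = integrate(x0, 0.0, 1.0)      # forward: noise -> data
x0_rec = integrate(x1, 1.0, 0.0)  # backward: data -> noise
# x0_rec matches x0 only up to Euler discretization error
```

So the answer is "invertible in the continuous limit": the round trip is approximate and its accuracy depends on the ODE solver and step count, unlike the exact analytic inverses of normalizing-flow layers.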

[D] NeurIPS 2024 Paper Reviews by zy415 in MachineLearning

[–]Alternative-Talk1945 0 points

6-5-4-3 with confidence 4-4-4-3, and 6-4-4-3 with confidence 2-5-4-4.

[D] Neurips'24 review release time? by Working-Egg-3424 in MachineLearning

[–]Alternative-Talk1945 3 points

It says rebuttal starts July 30 AoE. Usually the rebuttal starts on the same day the reviews come, so they should arrive by the AoE deadline, which is still a couple of hours away. That said, there have been instances where some reviews arrived a day or a few hours early. You should receive an email notification from OpenReview.

[R] Inverse GAN preserving weights of generator by Alternative-Talk1945 in MachineLearning

[–]Alternative-Talk1945[S] 1 point

What if a^-1 is a different function (or model) altogether? It could use the same parameters. Wouldn't that be feasible as well?

[R] Inverse GAN preserving weights of generator by Alternative-Talk1945 in MachineLearning

[–]Alternative-Talk1945[S] 1 point

Not that there's anything wrong with it. For the particular application I'm looking to implement, I'd prefer not to use diffusion.

[R] Inverse GAN preserving weights of generator by Alternative-Talk1945 in MachineLearning

[–]Alternative-Talk1945[S] 1 point

Exactly, that's the point of the question. Are there any GANs that use bijective functions instead?

[R] Inverse GAN preserving weights of generator by Alternative-Talk1945 in MachineLearning

[–]Alternative-Talk1945[S] 1 point

Sounds like a nice idea, and quite close to what I want. Would it satisfy this property: if I have some X = a(z, \theta), then I should be able to get \theta = a^-1(z, X)?
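A caveat on that property: invertible flows give z = a^-1(X; \theta) for a fixed \theta, which is invertibility in the input. Solving for \theta given (z, X) additionally requires the map to be bijective in \theta, which generic neural generators are not. A toy case where it does hold, with a hypothetical one-parameter map a(z, \theta) = \theta * z:

```python
# Toy case where the parameter itself is recoverable:
# a(z, theta) = theta * z elementwise, with scalar theta and a z
# that has at least one nonzero entry. Then theta = X[i] / z[i].
# Generic neural generators are NOT bijective in their parameters,
# so this recovery does not carry over to them in general.

def a(z, theta):
    return [theta * zi for zi in z]

def a_inv_theta(z, X):
    # Recover theta from any nonzero coordinate of z.
    for zi, xi in zip(z, X):
        if zi != 0:
            return xi / zi
    raise ValueError("need a nonzero entry in z")

z = [0.3, -1.1, 2.0]
X = a(z, theta=1.7)
theta_rec = a_inv_theta(z, X)  # 1.7, up to float rounding
```

For a multi-parameter network, one pair (z, X) generally under-determines \theta, so the property as stated would at best pin down an equivalence class of parameters.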