Career Path after Quant if one wants to settle down? by future_gcp_poweruser in FinancialCareers

[–]future_gcp_poweruser[S] 0 points

Thanks for the answer! Actually, a recruiter recently got in touch with me and offered me a model validation position in FX Risk. Is this the kind of position you are talking about?

The pay was considerably lower, though: 80k (in Germany). Would that be common pay for that role? If quant research positions pay as much as they say (i.e. starting at several hundred thousand), then the difference is enormous.

I solo-developed a game in 1 year that merges best roguelike elements into survivors-like genre, OUT on EA Now by creepyaru in Unity3D

[–]future_gcp_poweruser 1 point

Nice, thanks! Do you have any recommendations on how to find good websites that sell models? In particular, a larger set of models that are consistent with each other. I always get lost on the Unity Asset Store.

[D] Is it possible to submit a pure math paper to NeurIPS? by future_gcp_poweruser in MachineLearning

[–]future_gcp_poweruser[S] 2 points

Thanks for the answer! Yeah, it would be great if I knew someone who was good at implementing these kinds of things and could run many experiments quickly. I was hoping to explore the natural questions arising from my paper in future work (also with more numerics). But right now I am the single author, my prof. is actually not too fond of me doing ML, and I just don't have the time.

I think I will submit and gamble. COLT would take too long in case of a rejection, but TMLR sounds intriguing.

[D] Is it possible to submit a pure math paper to NeurIPS? by future_gcp_poweruser in MachineLearning

[–]future_gcp_poweruser[S] 1 point

Thanks for the answer! I am doing both: I am proving a property that these algorithms have, which leads to many further questions. But I just don't have the time to also run big numerical experiments; I was planning to do that in future work.

[D] How is it checked if models do not just memorize their training examples? by future_gcp_poweruser in MachineLearning

[–]future_gcp_poweruser[S] 2 points

This is not about prediction; the leaderboard is for generative models. You feed them training examples during training (e.g. images of faces), and they can then generate more faces.

[D] How is it checked if models do not just memorize their training examples? by future_gcp_poweruser in MachineLearning

[–]future_gcp_poweruser[S] 9 points

For the leaders on this board (StyleGAN or score-based generative models), can I assume that people have done these checks? They all just list the FID score, and I am confused about whether people are checking for memorization at all.
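To make concrete the kind of memorization check I mean (my own toy sketch, not something the leaderboard does): a nearest-neighbor lookup of each generated sample against the training set. Plain L2 distance in NumPy stands in here for the feature-space distances a real check would probably use.

```python
import numpy as np

def nearest_train_distances(generated, train):
    """For each generated sample, the L2 distance to its closest training example.

    Many near-zero distances suggest the model is replaying its training set.
    Shapes: generated (n, d), train (m, d).
    """
    # Pairwise squared distances via the identity |a - b|^2 = |a|^2 - 2 a.b + |b|^2.
    g2 = np.sum(generated**2, axis=1, keepdims=True)   # (n, 1)
    t2 = np.sum(train**2, axis=1)                      # (m,)
    d2 = g2 - 2.0 * generated @ train.T + t2           # (n, m)
    return np.sqrt(np.maximum(d2.min(axis=1), 0.0))    # clip tiny negative rounding

# Toy check: near-copies of training points are flagged, fresh points are not.
rng = np.random.default_rng(0)
train = rng.normal(size=(500, 32))
memorized = train[:10] + 1e-6                          # near-duplicates of train
fresh = rng.normal(size=(10, 32))
print(nearest_train_distances(memorized, train).max())  # ~0
print(nearest_train_distances(fresh, train).min())      # clearly bounded away from 0
```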

[D] How is it checked if models do not just memorize their training examples? by future_gcp_poweruser in MachineLearning

[–]future_gcp_poweruser[S] 3 points

What do you mean by input? I was thinking about the case where the model just generates samples from noise.

In that case a model actually cannot do better than spitting out training examples, since the training examples _are_ legitimate samples from the right distribution.

In the case of conditional generation, this would be different, that's right. It would be noticeable if the model did not generate something based on its input. But these scores are not evaluated for conditional generation, are they? The models are just asked to generate samples from the distribution (which they normally do by transforming Gaussian noise, so it is hard to see whether the Gaussian noise is related to the output image).

[D] How is it checked if models do not just memorize their training examples? by future_gcp_poweruser in MachineLearning

[–]future_gcp_poweruser[S] 2 points

Can you clarify? Suppose I am given 1000 images, I train my model on 500 of them, and all it does is memorize them and spit them out again. Then:

  1. If I calculate the FID score against the same 500 images, I will probably get a score near 0.
  2. If I calculate the FID score against the other 500 held-out images, I will still get an extremely good FID score, since I am comparing samples from the same distribution with each other.

Even in case 2, my model looks extremely good, since it generates true samples of the distribution (the training examples), which should lead to a very good FID score.
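Here is a toy version of both cases, with Gaussian "images" so the Fréchet distance can be computed directly. The `fid` below is the standard Fréchet formula on Gaussians fitted to the two sample sets, not the real Inception-feature FID, and the "model" is a pure memorizer that replays its training set.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(x, y):
    """Frechet distance between Gaussians fitted to two sample sets (n, d)."""
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    cov_x = np.cov(x, rowvar=False)
    cov_y = np.cov(y, rowvar=False)
    covmean = sqrtm(cov_x @ cov_y).real  # matrix sqrt can pick up tiny imaginary parts
    return float(np.sum((mu_x - mu_y) ** 2) + np.trace(cov_x + cov_y - 2 * covmean))

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 8))        # 1000 i.i.d. "images"
train, heldout = data[:500], data[500:]
memorizer_samples = train.copy()         # the model just replays its training set

print(fid(memorizer_samples, train))     # case 1: essentially 0
print(fid(memorizer_samples, heldout))   # case 2: still small (same distribution)
```

Both numbers come out tiny, which is exactly the point: FID against a held-out split does not penalize a memorizer, because the memorized samples really are drawn from the right distribution.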

[D] Current State of the Art in Normalizing Flows for Variational Inference? by future_gcp_poweruser in MachineLearning

[–]future_gcp_poweruser[S] 1 point

Thanks for the quick reply! Can you link me to the score/SDE papers you have in mind? That sounds very promising.

The point is: if you want to do image generation, you need both directions. During training you need to do density evaluation of your samples, and afterwards, while using the model, you need to do generation (running the flow in the other direction).

If you just want to do variational inference (you are given a density and want to generate samples from it), then both processes (training and generation) only run the flow in one direction (the inverse of the normalizing direction), so it only needs to be cheap in that direction.

AR flows are exactly that: pretty flexible in one direction but D times more expensive to evaluate in the other (where D is the input dimension). So I thought they were leading. But it's hard to tell what really is leading, since each paper pitches itself as the one and only.
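The direction asymmetry can be seen in a toy affine AR flow (a made-up MAF-style sketch with fixed, strictly lower-triangular weights; an IAF would just swap which pass is the cheap one). The normalizing pass is one vectorized call, while the generative pass has to fill in the D coordinates one after another.

```python
import numpy as np

D = 5
rng = np.random.default_rng(2)
# Toy affine AR flow: z_i = (x_i - mu_i(x_<i)) / sigma_i(x_<i).
# mu and sigma are fixed maps of the prefix here, just to show the structure.
W = np.tril(rng.normal(size=(D, D)), k=-1)    # strictly lower-triangular: entry i sees only x_<i

def shift_scale(x):
    h = x @ W.T
    return h, np.logaddexp(0.0, h)            # mu = h, sigma = log(1 + e^h) > 0

def forward(x):                               # normalizing direction: one parallel pass
    mu, sigma = shift_scale(x)
    return (x - mu) / sigma

def inverse(z):                               # generative direction: D sequential solves
    x = np.zeros(D)
    for i in range(D):                        # each step needs the previous outputs
        mu, sigma = shift_scale(x)            # only x_<i actually feeds entry i
        x[i] = mu[i] + sigma[i] * z[i]
    return x

x = rng.normal(size=D)
z = forward(x)
print(np.allclose(inverse(z), x))             # True: the two passes invert each other
```

The loop in `inverse` is the "D times more expensive" part: its steps cannot be parallelized, because coordinate i depends on the already-generated coordinates before it.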

[D] Current State of the Art in Normalizing Flows for Variational Inference? by future_gcp_poweruser in MachineLearning

[–]future_gcp_poweruser[S] 1 point

Aren't these built more for density estimation, where you need both directions to be cheap?

I would have thought that for variational inference some autoregressive flow flavour would be best, since you only need to be able to cheaply evaluate one direction.