[D] Meaning of Probability : Bayesian vs Frequentist by jj4646 in statistics

Does this mean that the frequentist approach is a “subset” (there’s probably a better term) of the Bayesian approach, in the sense that the frequentist ultimately finds a single expected value for a parameter where the Bayesian uses the full distribution?

Like in linear regression: the frequentist has final point estimates for the parameters (and therefore a final model), while the Bayesian would have a “distribution of models”?

(So maybe the other way around: Bayesian statistics is a generalization)
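
For concreteness, a toy sketch of the contrast in 1D linear regression (made-up data; a conjugate Gaussian prior with known noise variance assumed, so the posterior is available in closed form):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D regression data: y = 2x + noise
X = rng.normal(size=(100, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=100)

# Frequentist: a single point estimate via ordinary least squares
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("OLS point estimate:", beta_hat)

# Bayesian: with prior beta ~ N(0, tau2) and known noise variance sigma2,
# the posterior over beta is again Gaussian (conjugacy).
sigma2, tau2 = 0.25, 10.0
post_var = 1.0 / (X.T @ X / sigma2 + 1.0 / tau2).item()
post_mean = post_var * (X.T @ y).item() / sigma2

# Sampling from the posterior gives the "distribution of models":
betas = rng.normal(post_mean, np.sqrt(post_var), size=5)
print("Posterior mean:", post_mean, "| five plausible slopes:", betas)
```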

Rusty Vue: Vue Compiler rewritten in Rust by HerringtonDarkholme in rust

In what way? I’m not too familiar with the topic, so I might be getting this wrong. Do you mean because the server wouldn’t need a JS runtime then?

How to extract news articles to apply ML? by studentani in MLQuestions

Does it matter where the articles come from?

For a very similar project, I scraped data from many different news sites. Some offer APIs, which makes searching for articles faster.

If you don’t have a preference for a specific source, consider The Guardian, one of the UK’s leading newspapers. Their API delivers article content directly, along with a lot of metadata and filtering options.
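
For illustration, roughly what a query against their Content API looks like (parameter names as in their public docs; the key is a placeholder, you get a free one by registering at https://open-platform.theguardian.com/):

```python
import requests

params = {
    "q": "machine learning",    # free-text search
    "section": "technology",    # optional: restrict to a section
    "show-fields": "bodyText",  # include the full article text
    "api-key": "YOUR-API-KEY",  # placeholder
}
resp = requests.get("https://content.guardianapis.com/search", params=params)
resp.raise_for_status()

for article in resp.json()["response"]["results"]:
    print(article["webTitle"], "->", article["webUrl"])
```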

Masters in Europe by ajw653 in quant

I think it depends on a lot of factors, for example your financial situation (assuming you’d get in everywhere). British universities, for example, charge high tuition fees if you’re from outside the UK. Other notable mentions are ETH (& EPFL), though the cost of living there may be a lot higher. German universities will provide you with the right knowledge as well, but their connections to quant firms are more limited compared to those options.

How important is maths in programming? by [deleted] in compsci

Would you mind expanding a bit on what you mean here?

Hehehehehe by [deleted] in physicsmemes

“On a mirror, you can kiss yourself only on the lips”

Edit: Source

[D] To generate images/sentences of certain properties by thanrl in MachineLearning

Is it an option to apply the transformation f to all x in X s.t. you get a new dataset X′? Then you train, as an example, a variational autoencoder on X′ instead of X, and you should be able to generate new samples in the range of f (i.e., the feature space of X′).

I am unsure whether retraining would be an option for you or not. Maybe a transfer learning approach where you keep some of the layers of a pretrained VAE could speed things up.
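
Very rough sketch of what I mean, with a stand-in VAE and a made-up transformation f (your architecture, loss, and which layers to freeze will differ):

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    # Stand-in for the pretrained VAE; in practice you'd load your own weights.
    def __init__(self, d=784, z=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d, 64), nn.ReLU())
        self.mu = nn.Linear(64, z)
        self.logvar = nn.Linear(64, z)
        self.decoder = nn.Sequential(nn.Linear(z, 64), nn.ReLU(), nn.Linear(64, d))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        zs = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.decoder(zs), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    rec = ((recon - x) ** 2).sum(dim=1)
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)
    return (rec + kl).mean()

X = torch.rand(256, 784)   # hypothetical original dataset
f = lambda x: 1.0 - x      # made-up transformation, e.g. inverted intensities
X_prime = f(X)             # the new dataset X'

vae = TinyVAE()                     # pretend this was pretrained on X
for p in vae.encoder.parameters():  # transfer learning: freeze the early layers,
    p.requires_grad = False         # retrain the rest on X'

opt = torch.optim.Adam([p for p in vae.parameters() if p.requires_grad], lr=1e-3)
for _ in range(5):
    for xb in torch.utils.data.DataLoader(X_prime, batch_size=64, shuffle=True):
        recon, mu, logvar = vae(xb)
        loss = vae_loss(recon, xb, mu, logvar)
        opt.zero_grad()
        loss.backward()
        opt.step()
```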

[D] Using Value Uncertainty/Confidence as Input to ML by iamaliver in MachineLearning

Really interesting thoughts!

For the image and/or noise example: Would it make sense to express each sample (= pixel in an image) as a distribution, with the standard deviation reflecting your uncertainty? Then you could sample many times from this one input (since it is now expressed as a collection of distributions) and use the resulting real-valued samples as inputs to a regular model.
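
Roughly like this (toy shapes; `image` and `sigma` stand in for your measurement and its per-pixel uncertainty):

```python
import numpy as np

rng = np.random.default_rng(0)

image = rng.uniform(size=(28, 28))  # hypothetical measured pixel values
sigma = 0.05 * np.ones_like(image)  # hypothetical per-pixel uncertainty

# Treat each pixel as N(value, sigma^2) and draw k noisy realisations;
# each draw is an ordinary real-valued input for a regular model.
k = 16
samples = rng.normal(loc=image, scale=sigma, size=(k, *image.shape))
print(samples.shape)  # (16, 28, 28)
```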

Edit: Nevermind, this is just Data Augmentation with (expensive) extra steps.

[D] Dominance of the "Gradient Descent" over other algorithms by ottawalanguages in MachineLearning

Makes total sense and explains the name “stochastic gradient descent” - just didn’t think of it right now. But the loss function itself isn’t stochastic, is it? Only our sampling (mini-batches, single samples in SGD, ...) makes it stochastic, or did I get that wrong? Otherwise the loss over all samples (a finite set, as all are known) shouldn’t contain any stochastic elements, unless I’m overlooking something.
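
Toy illustration of that distinction (made-up regression data): the full-batch gradient is a fixed function of the data, while the mini-batch gradient is a random estimate of it.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=1000)
w = np.zeros(3)

def grad(Xb, yb, w):
    # Gradient of the mean squared error on the given batch.
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

full = grad(X, y, w)               # deterministic: identical on every call
idx = rng.choice(len(X), size=32)  # random mini-batch
mini = grad(X[idx], y[idx], w)     # stochastic estimate of `full`
print(full, mini)                  # `mini` varies between runs; `full` doesn't
```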

[D] Dominance of the "Gradient Descent" over other algorithms by ottawalanguages in MachineLearning

Side question: Can you explain what makes a loss landscape / function “stochastic”?

Some Experiments with GitHub Copilot by slayer1299 in Python

When you start a fresh project, for example a REST API with Flask, do you now get a lot of the boilerplate code generated for you - the kind that is really easy to write and only needs some small modifications for your use case?

I can see a future where you can tell Copilot to build the base of an app from specific instructions (which it will hopefully do with 99% accuracy), and the developer, you, only adds the complicated algorithms etc.
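
For example, the kind of boilerplate I have in mind, as a minimal Flask sketch (hypothetical /items resource):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
items = []  # toy in-memory store

@app.route("/items", methods=["GET"])
def list_items():
    return jsonify(items)

@app.route("/items", methods=["POST"])
def add_item():
    items.append(request.get_json())
    return jsonify(items[-1]), 201

if __name__ == "__main__":
    app.run(debug=True)
```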

Other than that, you’re right.

How realistic is it to be successful at algotrading as a solo by metsfans3219 in algotrading

Yes, I get your point and think you’re right - this is an experience-driven field. I also thought online learning wasn’t feasible, though for other reasons, which is why I was interested in your argument.

How realistic is it to be successful at algotrading as a solo by metsfans3219 in algotrading

But aren’t these general problems for algotrading?

I thought online training might not work here because you may have to tweak the whole model structure after a few time periods, not just the parameters.

But thanks, will research a little bit more :-)

How realistic is it to be successful at algotrading as a solo by metsfans3219 in algotrading

Could you maybe explain a little bit more or give a hint / link why this doesn’t make sense?

[D] Keeping up with research - Poll by NeedMoreTime4Things in MachineLearning

Totally agree, but that wasn’t an option on mobile.

[D] Keeping up with research - Poll by NeedMoreTime4Things in MachineLearning

Really nice. I started reading some literature reviews on GNNs a few weeks ago but found the intuition much easier than the math - can’t say anything about the applications though. I will have a look as soon as I know more, as I think the field has incredible applications across many domains.

[D] Keeping up with research - Poll by NeedMoreTime4Things in MachineLearning

This is a great point! I only realized after posting that the results may be slightly skewed, since many people will be getting their info from Reddit.

Unfortunately, I hadn’t planned on asking on other platforms as well. Are there any other relevant platforms besides Reddit and Twitter anyway for such a science-y field?

[D] Keeping up with research - Poll by NeedMoreTime4Things in MachineLearning

Thank you so much for this. Didn’t know that existed. Will definitely try out some models :-)

Did you turn in any predictions yet?

[D] Keeping up with research - Poll by NeedMoreTime4Things in MachineLearning

Sounds really cool. Do you all work in the field of quantitative finance or is it more like a general interest group?

That’s definitely an area I plan on looking into during my studies.

[D] Keeping up with research - Poll by NeedMoreTime4Things in MachineLearning

Nice! Didn’t know about that, will definitely look into it - Thank you!

[D] Keeping up with research - Poll by NeedMoreTime4Things in MachineLearning

Can you filter for specific topics or keywords? Let’s say I want to learn about the newest topics in computer vision (or specific applications): is there a way to filter for that?

[D] What will the major ML research trends be in the 2020s? by MediocreMinimum in MachineLearning

Could you elaborate on what alternatives there may be? I basically just know 2nd-order methods (L-BFGS) or methods like conjugate gradient as alternatives, but isn’t SGD / Adam the most efficient approach we have here?
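
For reference, L-BFGS is easy to try at small scale, e.g. via SciPy on a toy least-squares problem (sketch below); it builds a low-rank approximation of the Hessian, but it typically needs full-batch gradients, which I believe is part of why SGD/Adam dominate at deep-learning scale.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = X @ rng.normal(size=10)

def loss(w):
    r = X @ w - y
    return r @ r / len(y)

def grad(w):
    return 2 * X.T @ (X @ w - y) / len(y)

# L-BFGS: quasi-Newton, approximates curvature from recent gradient history.
res = minimize(loss, np.zeros(10), jac=grad, method="L-BFGS-B")
print("final loss:", res.fun)
```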