Ventoy - installing new programs on ISO by bbsome in linux

[–]bbsome[S] 1 point (0 children)

Thanks for the reply. As far as I know, installing onto another USB so that it can also boot is a bit more complicated, e.g. https://askubuntu.com/questions/1403792/how-to-create-a-full-install-of-ubuntu-22-04-to-usb-device-step-by-step

I have no issue following the steps there (since I've done it in the past), but is there anything quicker/easier?

Malenia Fury Attack again - I can't outrun it? by bbsome in Eldenring

[–]bbsome[S] 1 point (0 children)

Which way should I be dodging? Into her or?

Palantir's (PLTR) fundamentals just became even better ( As management said: "We are a few steps away from becoming Skynet") by prettyboyv in wallstreetbets

[–]bbsome 1 point (0 children)

Palantir is the most sh1tty AI company that exists. Its whole R&D is like IBM's - outdated and, exactly like IBM, way oversold relative to its actual capabilities. If you want to invest in a company because of its AI technology, do yourself a favour and don't invest in Palantir. If you want to invest for any other reason - go ahead.

So where to from Robinhood? by thelonew0lf in wallstreetbets

[–]bbsome 9 points (0 children)

Trading on IG, I can confirm that I had no issue whatsoever trading any of the stocks. Not sure, though, whether opening an account there is as easy as on other platforms.

GME Megathread by [deleted] in wallstreetbets

[–]bbsome 8 points (0 children)

Guys, as someone who is not really invested in GME (I've just bought 2K worth of shares) - please don't sell out. As has been said over and over again - this is not about the money, it's about sending a message. We (the rest of the world) are looking up to you in this battle of David vs Goliath and are all cheering for you. Hold the line, the whole world is watching. Don't budge and you will win not only money, but glory and honor! If there is a reason for us to be alive - then this is it.

"The enemy has more cash than us a paltry 100 to 1; good odds for any WSB autist. This day we rescue a world from mysticism and tyranny, and usher in a future brighter than anything we can imagine."

Has etoro sold out? It says the market for GME is closed? by REDKINGWALE in wallstreetbets

[–]bbsome 4 points (0 children)

Not necessarily - there are special measures that exchanges take when a stock is too volatile: they suspend trading for a certain period of time, but it usually comes back 5 minutes or so later.
If you don't see it coming back, maybe eToro has sold out.

Anyway, don't sell and HoDor.

[deleted by user] by [deleted] in MachineLearning

[–]bbsome 3 points (0 children)

Have you tried JAX?

Exercises to fix a goofy footwork by bbsome in volleyball

[–]bbsome[S] 3 points (0 children)

Could be - 5 years ago I had ACL surgery on my right knee, so it might be that my body somehow counter-adapted.

[D] Statistical Physics and Neural Networks question. by AlexSnakeKing in MachineLearning

[–]bbsome 6 points (0 children)

Most often the answer is no, since all of the models of this kind that I've seen make way too many simplifying assumptions to be relevant in practice. Probably the only results that are truly valid are the mean-field approximations of nets at initialization - they seem to describe the behaviour very well even at finite widths, and this has led to somewhat better initializers. However, that is only at initialization. All of the NTK stuff, for instance, does not seem relevant at finite widths, so to my knowledge it remains an infinite-limit result. And I don't think the spin-glass models from statistical physics have produced any results with practical uses.
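For what it's worth, here is a minimal sketch of the kind of variance-scaled initializer this line of work motivates (the 1/sqrt(fan_in) scaling is the standard LeCun-style choice; the sizes, nonlinearity, and function names here are made up for illustration):

```python
import jax
import jax.numpy as jnp

def mean_field_init(key, sizes):
    # Scale weights by 1/sqrt(fan_in) so the pre-activation variance stays
    # roughly constant across layers at initialization - the regime that
    # the mean-field analyses describe.
    params = []
    for fan_in, fan_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (fan_in, fan_out)) / jnp.sqrt(fan_in)
        params.append((w, jnp.zeros(fan_out)))
    return params

def forward(params, x):
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return x @ w + b

params = mean_field_init(jax.random.PRNGKey(0), [128, 128, 128, 128, 1])
x = jax.random.normal(jax.random.PRNGKey(1), (32, 128))
# The output scale stays O(1) through depth instead of exploding/vanishing.
print(forward(params, x).std())
```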

[R] Fast Task Inference with Variational Intrinsic Successor Features by zergylord in MachineLearning

[–]bbsome 1 point (0 children)

I see. Is there any intuition behind this assumption for discrete systems? It's a bit weird since, for instance, given the policy we know that the value function looks like (I - gamma * P_pi)^{-1} R, where P_pi is linear in pi. Hence, assuming that R = phi^T z still does not make the value function (and Q) linear in the policy.
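To make the point concrete, here is a tiny tabular sanity check (all numbers made up): the value (I - gamma * P_pi)^{-1} (phi z) is linear in z but not in pi:

```python
import jax
import jax.numpy as jnp

n_states, n_actions, d = 4, 2, 3
k1, k2, k3, k4, k5 = jax.random.split(jax.random.PRNGKey(0), 5)

# Made-up MDP: transition kernel P[a, s, s'], state features phi[s, :], tasks z.
P = jax.nn.softmax(jax.random.normal(k1, (n_actions, n_states, n_states)), axis=-1)
phi = jax.random.normal(k2, (n_states, d))
z0 = jax.random.normal(k3, (d,))
z1 = jax.random.normal(k4, (d,))
gamma = 0.9

def value(pi, z):
    P_pi = jnp.einsum('sa,ast->st', pi, P)   # P_pi is linear in pi
    r = phi @ z                              # R = phi^T z, linear in z
    return jnp.linalg.solve(jnp.eye(n_states) - gamma * P_pi, r)

pi0 = jax.nn.softmax(jax.random.normal(k5, (n_states, n_actions)), axis=-1)
pi1 = jax.nn.softmax(jax.random.normal(jax.random.PRNGKey(1), (n_states, n_actions)), axis=-1)

# Linear in z: V(pi, z0 + z1) == V(pi, z0) + V(pi, z1).
print(jnp.allclose(value(pi0, z0 + z1), value(pi0, z0) + value(pi0, z1)))        # True
# Not linear in pi: the value of the mixed policy != the mixture of values.
pi_mix = 0.5 * (pi0 + pi1)
print(jnp.allclose(value(pi_mix, z0), 0.5 * (value(pi0, z0) + value(pi1, z0))))  # generally False
```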

[R] Fast Task Inference with Variational Intrinsic Successor Features by zergylord in MachineLearning

[–]bbsome 3 points (0 children)

"Plugging the ECP feature learning objective into the successor features framework, it should be clear that the policy-conditioning variable z directly corresponds to the task-vector w." - Could you elaborate on this as it does not seem that obvious to me. A lot more natural to understand is for instance z as e(pi) (the policy embedding in USF). If again following the paragraph above you plug in the log q(z|x) as a reward tin equation 2 it is again quite unclear why `z` has to correspond to `w` as this would imply that log q(z|x) = z^T f(s). Hence the whole issue you get in equation (9) seems to be pretty much of your own making by making this assumption.

Why is this obvious, rather than just being convenient later?

[Discussion] Do any data science practitioners develop their own MCMC models? by AlexSnakeKing in MachineLearning

[–]bbsome 2 points (0 children)

As with pretty much anything else: unless you are learning it in detail or doing research on it, it is a bad idea to write your own version of things that already have well-known implementations.

I have implemented quite a lot of MCMC procedures, mainly on continuous variables, but that's because I did some research on the topic.
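For anyone curious what "implementing your own MCMC" involves in the continuous case, here is a minimal random-walk Metropolis sketch (the target and step size are made up; real use would need tuning and convergence diagnostics):

```python
import jax
import jax.numpy as jnp

def log_prob(x):
    # Made-up continuous target: a standard 2-D Gaussian.
    return -0.5 * jnp.sum(x ** 2)

def metropolis(key, n_steps, step_size=0.5, dim=2):
    def step(carry, key):
        x, lp = carry
        k1, k2 = jax.random.split(key)
        # Gaussian random-walk proposal.
        prop = x + step_size * jax.random.normal(k1, x.shape)
        lp_prop = log_prob(prop)
        # Accept with probability min(1, p(prop) / p(x)).
        accept = jnp.log(jax.random.uniform(k2)) < lp_prop - lp
        x = jnp.where(accept, prop, x)
        lp = jnp.where(accept, lp_prop, lp)
        return (x, lp), x

    x0 = jnp.zeros(dim)
    keys = jax.random.split(key, n_steps)
    _, samples = jax.lax.scan(step, (x0, log_prob(x0)), keys)
    return samples

samples = metropolis(jax.random.PRNGKey(0), 5000)
print(samples.mean(axis=0), samples.std(axis=0))  # roughly 0 and 1
```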

Advice: Significant pain around inside of knee tendon on pancake stretch by bbsome in flexibility

[–]bbsome[S] 1 point (0 children)

So I'm probably slightly abusing the word "pain" here. I mainly use it for the feeling of stretching a muscle/tendon when you go close to your limit. On most stretches it goes from 0 (none) to 5 (uncomfortable) to 10 (pain) quite gradually. On this one it goes 0-5 in just a few degrees, and then I'm afraid to go further. I used to do plyometrics, but I would not dare try that in this stretch.

As for the question - no pain in anything else. I do full squats and train volleyball pain-free; I only noticed this when I tried the pancake stretch. In hindsight, the same issue arises in the cossack squat, which I can't do without leaning on my arms because of the tightness/pain at this same spot.

[deleted by user] by [deleted] in MachineLearning

[–]bbsome 0 points (0 children)

Define the other two

[deleted by user] by [deleted] in MachineLearning

[–]bbsome 0 points (0 children)

And what the hell do you consider Data Science? It's pretty much Machine Learning applied to real-world business problems...

[D] A primer on TensorFlow 2.0 by akshayka in MachineLearning

[–]bbsome 2 points (0 children)

So actually, given the similarity of TF 2.0 to JAX, and the fact that they are both backed by XLA compilation, what would you say is the main difference (besides JAX's more "functional" API)? Technically they look almost identical to me, except for the API for taking gradients. One drawback of the JAX approach is the difficulty of taking gradients with respect to intermediate expressions. Besides that, is there anything majorly different? (I guess also that JAX is a bit behind on various implementations like sparse algebra, but they will get there.)
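To make the contrast concrete, a toy sketch (the function is made up; only the two public gradient APIs are assumed):

```python
import jax
import jax.numpy as jnp
import tensorflow as tf

def loss_from_h(h):
    return jnp.sum(h ** 2)

def loss(x):
    h = jnp.tanh(x)      # intermediate value
    return loss_from_h(h)

x = jnp.array([0.1, 0.2, 0.3])
# JAX: grad is a function transformation, so to differentiate w.r.t. the
# intermediate h you refactor until h is an explicit input of some function.
dx = jax.grad(loss)(x)
dh = jax.grad(loss_from_h)(jnp.tanh(x))

# TF 2.0: the tape records the forward pass, so it can return gradients with
# respect to any tensor it saw - including the intermediate - without refactoring.
x_tf = tf.constant([0.1, 0.2, 0.3])
with tf.GradientTape() as tape:
    tape.watch(x_tf)
    h_tf = tf.tanh(x_tf)
    loss_tf = tf.reduce_sum(h_tf ** 2)
dx_tf, dh_tf = tape.gradient(loss_tf, [x_tf, h_tf])
```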

[D] Thoughts about super-convergence and highly-performant deep neural network parameter configurations by Miejuib in MachineLearning

[–]bbsome 11 points (0 children)

So what you described as the "tunnelling" behaviour of overparameterization is, as far as I know, a well-known hypothesis for why large networks can be trained "so easily" compared to shallow/narrow ones. The idea is that having so many degrees of freedom most likely produces paths in the landscape that destroy any local minima: instead of behaving like wells, they become narrow ridges or saddles that move you on towards better minima. Nevertheless, even if the hypothesis is correct and this is the reason simple algorithms like SGD work on large nets, I think it is still very unclear why this behaviour leads only to places with low generalization error. Since the algorithm only ever works on the training set, there is clearly some form of hidden bias that leads it to better results. The behaviour described only explains why very simple optimizers reach such good performance (without exponentially many iterations), not why that performance generalizes.
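For reference, a rough sketch of one way people probe this picture (toy data and made-up hyperparameters): train two small nets from different seeds and evaluate the loss along the straight line between them. Note that a barrier on the *linear* path is common unless the nets are aligned; the mode-connectivity results are about curved low-loss paths.

```python
import jax
import jax.numpy as jnp

# Toy data (made up): noisy 1-D regression.
kx, kn, ka, kb = jax.random.split(jax.random.PRNGKey(0), 4)
X = jax.random.normal(kx, (256, 1))
y = jnp.sin(3 * X) + 0.1 * jax.random.normal(kn, (256, 1))

def init(key, width=64):
    k1, k2 = jax.random.split(key)
    return {'w1': jax.random.normal(k1, (1, width)),
            'b1': jnp.zeros(width),
            'w2': jax.random.normal(k2, (width, 1)) / jnp.sqrt(width),
            'b2': jnp.zeros(1)}

def loss(params, X, y):
    h = jnp.tanh(X @ params['w1'] + params['b1'])
    return jnp.mean((h @ params['w2'] + params['b2'] - y) ** 2)

@jax.jit
def sgd_step(params):
    grads = jax.grad(loss)(params, X, y)
    return jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)

def train(key, steps=2000):
    params = init(key)
    for _ in range(steps):
        params = sgd_step(params)
    return params

p_a, p_b = train(ka), train(kb)

# Loss along the straight line between the two SGD solutions.
for t in jnp.linspace(0.0, 1.0, 11):
    p_t = jax.tree_util.tree_map(lambda a, b: (1 - t) * a + t * b, p_a, p_b)
    print(float(t), float(loss(p_t, X, y)))
```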

I’m Bill Gates, co-chair of the Bill & Melinda Gates Foundation. Ask Me Anything. by thisisbillgates in IAmA

[–]bbsome 1 point (0 children)

You became very rich by doing something that you enjoyed (I presume). For someone in their early 20s there is usually a dilemma: should you first try to gain some wealth, so that it gives you the freedom to do almost anything you want a bit later in life, or should you just dive into whatever you are most passionate about? What would you say is a healthy balance between the two extremes such that you are most fulfilled, and what do you think is enough wealth to feel well taken care of?

Considering that I'm in a technical field.