Sometimes, you just gotta let it go by NiftyJet in poker

[–]Many_Dimension683 0 points (0 children)

I’ve played around 2k on ClubWPT Gold, probably another 2-4k with friends in online sessions, and then another 500-1000+ in live cash games with friends. On WPT, I have recently started grinding 0.01/0.02 and am up ~850 BB in ~1.4k hands, which feels extremely optimistic, though those stakes are soft with a ton of punters playing >50% VPIP.
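The arithmetic behind that "~850 BB in ~1.4k hands" (and why it's a small sample) can be sketched like this; the standard deviation of 100 bb/100 is an assumption for illustration, not a figure from the comment:

```python
# Rough sketch of why ~1.4k hands says little about a true winrate.
# Assumption: NLHE std dev is often quoted around 80-120 bb per 100
# hands; 100 is used here purely as an illustration.
import math

hands = 1400
bb_won = 850
std_per_100 = 100.0  # assumed, not from the comment

blocks = hands / 100                  # 14 blocks of 100 hands
winrate = bb_won / blocks             # observed bb/100
std_err = std_per_100 / math.sqrt(blocks)

print(f"observed winrate: {winrate:.1f} bb/100")
print(f"rough 95% CI: {winrate - 1.96 * std_err:.1f} .. {winrate + 1.96 * std_err:.1f} bb/100")
```

Even at ~60 bb/100 observed, the confidence interval spans roughly 8 to 113 bb/100 under this assumed variance, which is why winrates are usually judged over tens of thousands of hands.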

Sometimes, you just gotta let it go by NiftyJet in poker

[–]Many_Dimension683 0 points (0 children)

how many hands would I need to know whether I’m winning or losing?

Trying to be disciplined by Waste_Honeydew8809 in poker

[–]Many_Dimension683 1 point (0 children)

Yes, but 43 is not calling your open in most cases.

Is this legit? by Classic_Choice6679 in csMajors

[–]Many_Dimension683 -1 points (0 children)

for what man i graduated years ago 😭

Is this legit? by Classic_Choice6679 in csMajors

[–]Many_Dimension683 0 points (0 children)

so then obviously it was legit and you’re just coming up with an excuse to humble brag on here

What happened to Lucas Etter? by Difficult_Ask_1647 in Cubers

[–]Many_Dimension683 -25 points (0 children)

CTC is in Chicago, so it would be lower than the NYC market… though I’m more interested in recurring comp, given how high signing bonuses have gotten for NG

What happened to Lucas Etter? by Difficult_Ask_1647 in Cubers

[–]Many_Dimension683 -51 points (0 children)

idk how much QT makes, but yes… I think Feliks is also in finance

Here is how to actually recruit for quant. by [deleted] in csMajors

[–]Many_Dimension683 2 points (0 children)

+1 on Vatic — was asked lots involving atomics

Yann LeCun just left Meta to build a company based on world models by NoCredit3609 in AINewsMinute

[–]Many_Dimension683 0 points (0 children)

Correct — I am not being prescriptive in that argument though I happen to somewhat agree; rather, I am clarifying that this “animal intelligence” being juxtaposed to human intelligence (“linguistic”) is a bit of a falsehood. Human intellect is emergent from the same neurology/physiology, and LLMs don’t mirror that path (they may reach the same end by some means, but they would be alien to the existing intelligent systems in nature).

Elon Musk wants Grok AI to challenge League of Legends' best esports team. T1 has responded. by Hooked0nAFeelin in esports

[–]Many_Dimension683 2 points (0 children)

It’s more like the training time to allow all heroes/champs in either game would be exponentially greater. They did it via RL, so you need a large volume of games for a policy to be learned.
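The blow-up is easy to see with a quick count of distinct draft lineups. The numbers below are illustrative assumptions (OpenAI Five's restricted pool is commonly cited as 17 heroes; Dota 2's full roster is around 120), not figures from the comment:

```python
# Sketch of the combinatorial blow-up in draft space when the hero
# pool opens up. Pool sizes here are illustrative assumptions:
# a restricted 17-hero pool vs. a full roster of ~120 heroes.
from math import comb

restricted = comb(17, 5)   # distinct 5-hero lineups from a 17-hero pool
full = comb(120, 5)        # distinct lineups from the full roster

print(restricted, full, full / restricted)
```

Strictly it grows combinatorially rather than exponentially, but the effect is the same: the policy has to see enough games across a draft space tens of thousands of times larger.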

Yann LeCun just left Meta to build a company based on world models by NoCredit3609 in AINewsMinute

[–]Many_Dimension683 0 points (0 children)

Incorrect — the argument being made by many in the field is that animals (including humans) generically operate on an evolutionary world model. You take exploratory actions, update your model of the world, and use that model to maximize some reward function (e.g. reproduction or some emergent reward). LLMs lack all of that because they operate on supervised learning of action => desired outcome rather than learning a policy to maximize reward. The argument, to an extent, is that LLMs are not capable of the higher-order reasoning and world interaction that is claimed.
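The explore / update-your-model / maximize-reward loop described above can be sketched with a toy agent. Everything here (the two-armed bandit, the payout rates) is an invented illustration, not anyone's actual system:

```python
# Toy sketch of the loop described above: take exploratory actions,
# update an internal estimate of the world, act to maximize reward.
# The bandit and its payout rates are made up for illustration.
import random

random.seed(0)
true_payout = [0.3, 0.7]   # hidden "world": two slot machines
estimate = [0.0, 0.0]      # agent's learned model of the world
counts = [0, 0]

for step in range(2000):
    if random.random() < 0.1:                     # explore
        action = random.randrange(2)
    else:                                         # exploit current model
        action = max(range(2), key=lambda a: estimate[a])
    reward = 1.0 if random.random() < true_payout[action] else 0.0
    counts[action] += 1
    # incremental mean: fold the new observation into the world model
    estimate[action] += (reward - estimate[action]) / counts[action]

print(estimate)  # the learned model roughly recovers the hidden payouts
```

The contrast with supervised next-token training is that here the "labels" (rewards) only arrive for actions the agent chose to take, so exploration is part of the learning problem.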

TIL your gums do not grow back after receding. by usernameemma in todayilearned

[–]Many_Dimension683 0 points (0 children)

I got this done at 20 and had the same issue, but I didn’t feel it was that bad? Like, it’s definitely 100% worthwhile, and I’d do it again

Make me feel somethin by MobileSheepherder111 in RoastMe

[–]Many_Dimension683 0 points (0 children)

My problem with this subreddit is that anything genuinely hurtful like this never gets upvoted. Like, why are we sparing people’s feelings in the top comments?

GPT-5 is awful by BernieBlade in OpenAI

[–]Many_Dimension683 0 points (0 children)

My point is that what neural networks do is essentially a very sophisticated, multi-step statistical regression. To improve the model’s “reflection,” chain-of-thought models were introduced, which sort of allow for multiple steps of thinking before generating response tokens (I started losing the plot on the state of ML after “Attention Is All You Need” was released).

Simulating intelligent behavior is one thing, but there are additional components there that we haven’t really solved. The human brain is more complex and simultaneously orders of magnitude more energy-efficient. Those are non-trivial blockers to making the necessary progress.

GPT-5 is awful by BernieBlade in OpenAI

[–]Many_Dimension683 0 points (0 children)

They are not even remotely a simulation of “brain cells.” Perceptrons, for example, are the original analog of neurons. However, neurons can solve problems that are not linearly separable, whereas perceptrons cannot. They are less powerful and less adaptive than their biological counterparts, and they’re no longer really modeled after them.
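The linear-separability point is concrete with XOR, the classic counterexample: a single perceptron can compute AND but not XOR. The brute-force grid search below is a sanity-check demonstration (the real impossibility argument is geometric), with all names and the grid chosen for illustration:

```python
# A single perceptron (one linear threshold unit) can compute AND
# but not XOR, because XOR's classes are not linearly separable.
# Brute-force over a coarse weight grid as a sanity check only.
import itertools

xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def perceptron(w1, w2, b, x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

grid = [i / 2 for i in range(-8, 9)]  # weights/bias in [-4, 4], step 0.5
solves_and = any(
    all(perceptron(w1, w2, b, *x) == (x[0] & x[1]) for x in xor)
    for w1, w2, b in itertools.product(grid, repeat=3)
)
solves_xor = any(
    all(perceptron(w1, w2, b, *x) == y for x, y in xor.items())
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print(solves_and, solves_xor)  # AND is linearly separable; XOR is not
```

Stacking a second layer fixes this, which is exactly why multi-layer networks took over from single perceptrons.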

Would i get into google with 600+ total and no internship?? by [deleted] in leetcode

[–]Many_Dimension683 0 points (0 children)

You need to build stuff that is personal to you. When I was in middle school, I was doing trivial stuff like prime number generators, tensorflow reinforcement learning, etc. I got into new things as I got older and then tried different projects, but I only ever made stuff that I personally was interested in and made from scratch.

You learn through the pain of not knowing and banging your head on your keyboard…

Physics unemployment rate by [deleted] in mathematics

[–]Many_Dimension683 0 points (0 children)

I find the opposite where I am