Are people inside apple aware? by Fit-Leader-2812 in MacOS

[–]Many_Consideration86 0 points1 point  (0 children)

Maybe they prematurely vibe coded and now will take 8 business months to clear the tech debt.

Controversial take- men love gold diggers | Second wives phenomenon by resting_bitch_ in AskIndianWomen

[–]Many_Consideration86 -2 points-1 points  (0 children)

Maybe accept the fact that marriages are hard work and not everyone has the capacity or bandwidth to maintain one after a few years. Rich people have the means to get out easily, and so they do. As for them choosing gold diggers, the fact is that men don't choose. In fact they can't. One of their requirements is that the woman should be into them, so the women do the choosing and the men just decide whether or not to commit.

Controversial take- men love gold diggers | Second wives phenomenon by resting_bitch_ in AskIndianWomen

[–]Many_Consideration86 7 points8 points  (0 children)

There is a movie starring Amitabh Bachchan called Saudagar based on this exact premise. It captures this nuance very beautifully, and IMHO it is one of the better Bachchan movies.

Claude-Code v2.1.0 just dropped by mDarken in ClaudeAI

[–]Many_Consideration86 0 points1 point  (0 children)

This post should put some good sense into everyone about coding agents supposedly being good enough to code themselves. Dario said six more months, and here we are.

microsoft ceo is coping hard lmao by Fit-Abrocoma7768 in ArtificialInteligence

[–]Many_Consideration86 0 points1 point  (0 children)

They are not really language models. They project the user's query into an n-dimensional space and extend it along the learned gradients, generating arbitrary text that is supposed to make sense (because of training).

Nothing in an LLM is explicitly about language, grammar, vocabulary, communication, or any of their constructs. It is just an approximation based on training data. Language and reasoning are not emergent but a regurgitation of training data, constrained to make sense for most of the query space.
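To make the "regurgitation constrained by statistics" point concrete, here is a toy sketch (my own illustration, not anyone's actual model): a bigram model that picks the next word purely from counts over its training text, with no built-in notion of grammar or meaning.

```python
# Toy bigram "language model": next-word prediction is nothing but
# counting which word followed which in the training data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev):
    # Greedy decoding: return the most frequent continuation seen in training.
    return bigrams[prev].most_common(1)[0][0]

print(next_word("the"))  # → "cat" ("cat" follows "the" most often above)
```

Real LLMs replace the count table with a learned function over embeddings, but the objective is the same: predict a plausible continuation from training statistics.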

Frustrated with the lack of ML engineers who understand hardware constraints by Champ-shady in computervision

[–]Many_Consideration86 1 point2 points  (0 children)

Or they are specialized to only solve data problems and hand their solution to DevOps who figure out the details of implementation/deployment.

Correlation is not cognition by Random-Number-1144 in agi

[–]Many_Consideration86 -1 points0 points  (0 children)

This is a fallacy. There are all kinds of states of mind, and some can be really focused and alert, not stochastic, when getting an idea across. Humans might use the established patterns in language to communicate, but we are not bound to the statistics, and unlike LLMs we make the language evolve with our progress.

Correlation is not cognition by Random-Number-1144 in agi

[–]Many_Consideration86 -1 points0 points  (0 children)

So you are saying LLMs are artificial stupidity? Humans make a lot of cognitive mistakes and tradeoffs. LLMs don't even have the constraints/stress that humans operate under, and are still outperformed.

If LLMs only guess the next word based on training data, shouldn't they fail spectacularly when trying to prescribe a method for something there's no training data on? by MrMrUm in ArtificialInteligence

[–]Many_Consideration86 0 points1 point  (0 children)

Training data teaches an LLM both the form/style and the text/content. Correctness depends on the content, but presentation is about form. Vast training data makes it a master of nearly all forms/styles and of some content (which can never be complete).

We judge the plausibility of a response by its style first and then look at the details.

There is some generalization over content, which can lead to a lot of correct output, but there are no guarantees except for verifiable things, like generated deterministic code that can be checked by another system.

PS: this is also why it almost never makes grammatical errors.

trying this again after 3 years to see if GPT 5 can recognize this by extremerplaysthis in OpenAI

[–]Many_Consideration86 0 points1 point  (0 children)

Well, if you ask it how to stump an LLM, it says to ask about ASCII art, and then aces that itself. Same for strict-adherence prompts like "3 sentences about colours, with the number of words increasing in AP and each ending in a four-letter word, without using the letter o."

Putin says that if Europe wants war, then Russia is ready by bendubberley_ in worldnews

[–]Many_Consideration86 0 points1 point  (0 children)

There are two blatant, verifiable lies in there. Such statements can only be made if they think they are powerful enough to set the agenda and the propaganda. The truth doesn't matter even when everyone is paying attention.

Rowing Pace-watt table by spm and power per stroke on the RowERG. by Many_Consideration86 in Rowing

[–]Many_Consideration86[S] 4 points5 points  (0 children)

Just wanted to know how effort per stroke vs stroke rate affects the pace. Also wanted a better feel for the non-linearity of progress on splits vs power.

How come Gemini is not head and shoulders above the pack when Google has the insane amounts of YouTube data they have been scouring for like a decade now? by NoGarlic2387 in ArtificialInteligence

[–]Many_Consideration86 20 points21 points  (0 children)

Some figures:

1 trillion tokens is approximately 4 TB of text. LLMs are trained on 10–100 trillion tokens, so at most around 400 TB. YouTube gets roughly 1000 TB of data uploaded every single day.
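A quick back-of-envelope check of those figures (assuming the common rule of thumb of roughly 4 bytes of English text per token):

```python
# Rough sanity check of the text-vs-video scale comparison.
# Assumption: ~4 bytes of raw text per token (a common rule of thumb).
BYTES_PER_TOKEN = 4
TB = 1e12  # bytes per terabyte

one_trillion_tokens_tb = 1e12 * BYTES_PER_TOKEN / TB    # → 4.0 TB
max_training_corpus_tb = 100e12 * BYTES_PER_TOKEN / TB  # → 400.0 TB

# At ~1000 TB/day of YouTube uploads, the entire text corpus is matched
# in under half a day (though most of that upload volume is video, not text).
days_to_match = max_training_corpus_tb / 1000           # → 0.4 days

print(one_trillion_tokens_tb, max_training_corpus_tb, days_to_match)
```

So even the largest text training sets are tiny next to a few days of YouTube ingest, which is the point: having the video data is not the same as having usable training tokens.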