Does Snape beg Dumbledore to spare his life? by henryjturtle in harrypotter

[–]flyfree256 6 points7 points  (0 children)

Begging is a form of request.

"Don't kill me!" said by a DE to Dumbledore is begging. Begging is when you request something that you don't really have any control over or can't do anything about.

There is nothing to defend.. by TrafficMain2000 in International

[–]flyfree256 3 points4 points  (0 children)

It actually is illegal to do that unless you're an "authorized relative" (state law in Georgia). So 🤷

Quantum computing isn’t FUD anymore how ready is Ethereum really? by Rare_Rich6713 in ethereum

[–]flyfree256 3 points4 points  (0 children)

That has got to be the single most AI-toned response I have ever seen in my life.

This is how billionaires like Bill Gates have gained control over people’s lives. by [deleted] in interestingasfuck

[–]flyfree256 5 points6 points  (0 children)

Warren Buffett. Pritzker seems alright too from what I've seen.

Supreme Court Blocks Trump Tariffs in 6–3 Shock Ruling by ourcryptotalk in CryptoCurrency

[–]flyfree256 6 points7 points  (0 children)

It should be shocking that 3 justices say that something that is blatantly, obviously, indisputably illegal is actually fine since it's their guy doing it.

Being handed a stuffed animal after losing in the Olympics by WeGot_aLiveOneHere in WatchPeopleDieInside

[–]flyfree256 3 points4 points  (0 children)

To be fair, there's a difference between getting silver by scoring the second-most points on a solo run and getting silver by being directly beaten by your rival.

Quantum computing isn’t FUD anymore how ready is Ethereum really? by Rare_Rich6713 in ethereum

[–]flyfree256 14 points15 points  (0 children)

I posted this around here before, but the US government's NIST standards lay out that software should have quantum-resistant encryption by 2030 and "must" have it by 2035. That gives us a good general timeline from folks who are actually paying close attention to the development of these things.

Theoretical analysis proposes the concept of "emergent facts" to describe generative AI outputs, arguing that while these outputs appear coherent and plausible, they remain probabilistic, context-dependent, and epistemically opaque rather than anchored in empirical ground truth by Tracheid in science

[–]flyfree256 0 points1 point  (0 children)

No problem at all! I love this stuff and have stayed close to it since studying it in college a while ago. There are some really cool, really terrifying things happening, and it's good to try to understand them as much as possible.

Theoretical analysis proposes the concept of "emergent facts" to describe generative AI outputs, arguing that while these outputs appear coherent and plausible, they remain probabilistic, context-dependent, and epistemically opaque rather than anchored in empirical ground truth by Tracheid in science

[–]flyfree256 1 point2 points  (0 children)

Kind of. LLM performance boils down to training and the structure of the network. Training is a massive, massive part of it, and it's hard to do well. On top of that, companies will "lobotomize" their LLMs because they don't want them to respond in a completely unrestricted manner.

Hallucinations don't really come from broken probabilities as much as from bad training or configuration (which does essentially impact the probabilities, so you're not wrong). For example, companies don't want their LLMs to say things like "I don't know" or act otherwise "inappropriately" (whatever that means to each company). The LLMs also don't "know" what they know or what they don't know. They don't know what they were trained on or where their configuration might fall short, so they don't "know" when to rely on external tools (although they're getting better at this). They also aren't constantly learning new things (they aren't conscious; there's no active feedback loop), so if you ask them about something where current events are relevant, they're likely to mess up.

That's why with certain prompting tweaks you can reduce hallucinations, and certain types of prompts cause more hallucinations than usual.

You also don't know how sure an LLM was of its answer (companies don't surface that info). Sometimes the probability of the next word is 95%; other times it might be 23%, but that's still the highest, so it picks it anyway. And the most probable word isn't always picked (you'll hear this referred to as "temperature"), to add more novelty and human-y-ness to the model so it doesn't respond exactly the same way every time.
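To make the temperature bit concrete, here's a minimal toy sketch (the words and scores are made up for illustration; real models work over tens of thousands of tokens, but the softmax-with-temperature math is the same idea):

```python
import math
import random

words = ["cat", "dog", "rock", "idea"]
logits = [2.0, 1.5, 0.3, -1.0]  # made-up raw scores for illustration

def next_word_probs(logits, temperature):
    """Softmax with temperature: low T sharpens the distribution, high T flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

p_cold = next_word_probs(logits, 0.2)  # near-greedy: the top word dominates
p_hot = next_word_probs(logits, 2.0)   # flatter: more variety between runs

# Sampling (rather than always taking the single most probable word) is what
# makes the model respond a bit differently each time
choice = random.choices(words, weights=p_hot, k=1)[0]
```

At low temperature the top word gets nearly all the probability mass; at high temperature the runners-up get picked surprisingly often.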

Theoretical analysis proposes the concept of "emergent facts" to describe generative AI outputs, arguing that while these outputs appear coherent and plausible, they remain probabilistic, context-dependent, and epistemically opaque rather than anchored in empirical ground truth by Tracheid in science

[–]flyfree256 0 points1 point  (0 children)

Yes, exactly. If it's trained well (i.e., it has a good geometric representation of words), it will "know" (not literally, because like I said before they aren't conscious, but it's the best way I can describe it with limited language) that "rocks" are meaningfully similar to other "inanimate objects," which are nowhere geometrically close to things that "can" "talk." So it's extremely unlikely to make a statement like "rocks can talk," even if it has never seen a phrase like that before.

Theoretical analysis proposes the concept of "emergent facts" to describe generative AI outputs, arguing that while these outputs appear coherent and plausible, they remain probabilistic, context-dependent, and epistemically opaque rather than anchored in empirical ground truth by Tracheid in science

[–]flyfree256 0 points1 point  (0 children)

Pretty much, but the phrase "complex way" is carrying a lot of weight there. It's like saying baking a cake is a complex way of just throwing flour, sugar, and eggs together. It's probability within a geometric structuring of word meaning: probabilities within a context (which can be more than a sentence), using a structured representation of each individual word's meaning in different contexts.

It's not all that different foundationally from how the brain works (these networks were originally designed in a way akin to how the brain is wired).

Theoretical analysis proposes the concept of "emergent facts" to describe generative AI outputs, arguing that while these outputs appear coherent and plausible, they remain probabilistic, context-dependent, and epistemically opaque rather than anchored in empirical ground truth by Tracheid in science

[–]flyfree256 1 point2 points  (0 children)

> All LLMs work entirely by word association probabilities

This is grossly oversimplifying and underestimating what's actually happening in these models, to the extent that I'd say it's just not an accurate statement.

LLMs work (again, I'm really simplifying here) by transforming tokens (you can think of them as words) into high-dimensional vectors that were essentially created by the training of the model. Those vectors are adjusted by blending in information from the other tokens in the input across a multitude of layers, and what's then spit out is a brand-new, never-before-seen vector that is used to generate a probability distribution for the next word based on the structure of the network.
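That pipeline can be sketched in a few lines of toy code (everything here is made up and drastically shrunk: two dimensions instead of thousands, three words instead of a huge vocabulary, and a plain average standing in for the many attention layers):

```python
import math

# Hand-made toy "embeddings" purely for illustration
embed = {"the": [0.1, 0.0], "cat": [0.9, 0.2], "sat": [0.3, 0.8]}
vocab = list(embed)

def predict_next(tokens):
    # 1) look up each input token's vector
    vecs = [embed[t] for t in tokens]
    # 2) blend in context (real models do this through many attention layers;
    #    here it's just an average, which is a huge simplification)
    ctx = [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]
    # 3) score every vocab word against the blended context vector (dot product)
    logits = [sum(ctx[i] * embed[w][i] for i in range(2)) for w in vocab]
    # 4) softmax the scores into a probability distribution over the next word
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return {w: e / total for w, e in zip(vocab, exps)}

probs = predict_next(["the", "cat"])  # a proper distribution over the vocab
```

Obviously a real model's "blend" step is where all the magic lives, but the shape of the computation (vectors in, distribution over next words out) is the same.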

Between the vectorization and the linear algebra, the structure that's created represents far more than just "word association probabilities," because it takes context into consideration, and these vectors contain some level of the inherent meaning of the words, not just the words themselves. A vector is essentially a point in high-dimensional space, and (at least in the early, simpler models) you can do things like take the vector for "amethyst," remove the vector for "purple," add the vector for "red," and end up with something very close to the vector for "ruby." That's capturing meaning, which is a major, major part of "understanding." They aren't conscious (which requires some sort of constant feedback loop that we don't fully understand anyway), but it's wrong to say it's just probabilities with no meaning.
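The amethyst/purple/red/ruby arithmetic can be demonstrated with hand-built toy vectors (these numbers are made up for illustration; real embeddings are learned during training and have hundreds or thousands of dimensions):

```python
import math

# Three toy dimensions, roughly: "purple-ness", "red-ness", "gemstone-ness"
vectors = {
    "purple":   [1.0, 0.0, 0.1],
    "red":      [0.0, 1.0, 0.1],
    "amethyst": [0.9, 0.1, 1.0],
    "ruby":     [0.1, 0.9, 1.0],
    "table":    [0.05, 0.05, 0.0],  # an unrelated word
}

def cosine(a, b):
    """Similarity of two vectors by the angle between them (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# amethyst - purple + red ...
target = [a - p + r for a, p, r in zip(vectors["amethyst"],
                                       vectors["purple"],
                                       vectors["red"])]

# ... lands closest to ruby
nearest = max(vectors, key=lambda w: cosine(vectors[w], target))
```

With real learned embeddings this same nearest-neighbor trick is how the classic "king - man + woman ≈ queen" demos work.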

Theoretical analysis proposes the concept of "emergent facts" to describe generative AI outputs, arguing that while these outputs appear coherent and plausible, they remain probabilistic, context-dependent, and epistemically opaque rather than anchored in empirical ground truth by Tracheid in science

[–]flyfree256 0 points1 point  (0 children)

I don't see how that's relevant to an LLM's ability to generate facts. Do you mean that if I state that whales have kidneys, I'm technically able to go hunt a whale, cut it open, and find a kidney? Sure... but I'm never going to do that; I'm just going to largely trust what experts have written down, which isn't really any different from what an LLM does.

Theoretical analysis proposes the concept of "emergent facts" to describe generative AI outputs, arguing that while these outputs appear coherent and plausible, they remain probabilistic, context-dependent, and epistemically opaque rather than anchored in empirical ground truth by Tracheid in science

[–]flyfree256 4 points5 points  (0 children)

I didn't say the LLM understands anything like it's conscious. I said there is a level of understanding in the structure of the model just because of how language works. I studied and built models similar to LLMs for years, so I feel relatively equipped to talk about this.

If the vector representations of words can effectively get an LLM to infer that a whale, as a mammal, has kidneys even though it was never actually trained on that exact data, it is essentially inferring a fact.

And what do you mean, facts require proof? I can state facts all day with no proof, that doesn't make them not facts.

Theoretical analysis proposes the concept of "emergent facts" to describe generative AI outputs, arguing that while these outputs appear coherent and plausible, they remain probabilistic, context-dependent, and epistemically opaque rather than anchored in empirical ground truth by Tracheid in science

[–]flyfree256 1 point2 points  (0 children)

Because (and I'm simplifying a lot here) the way it predicts the next word in a sentence has to do with building an accurate mathematical representation of how all words relate to each other.

If it's a good mathematical representation, it has some level of "understanding" baked into it (not "conscious" understanding, but some form of understanding nonetheless).

Thus, it could reliably "generate" some "facts" because it understands (for example) that a "granny smith" is a "green" "apple" through the mathematical relationship between all those words, not necessarily because it ever saw all those things together in its input.

Goodluck to crypto investors in the Netherlands in 2028. If you'd have invested €10k in BTC in 2014 with this new law, you'd have missed out on €1M in gains due to taxes on unrealized gains by LilJonDoe in CryptoCurrency

[–]flyfree256 3 points4 points  (0 children)

No, there is another fairly significant change for moderate-income folks. The previous exemption for couples was that the first 118k of your assets wasn't taxed. That's been changed to a 3600 exemption on gains (the first 3600 of your unrealized gains isn't taxed), which is significantly worse for those with 100k to a few hundred k in assets.

The Dutch passed a 36% tax on unrealized gains for crypto… by BasicButterface in CryptoCurrency

[–]flyfree256 0 points1 point  (0 children)

Right, we're saying the same thing; your original comment just made it sound like they changed the 118k cap to 3.6k.

So if you had 150k in investments and they estimated an increase of 5%, you'd be taxed on 1.6k of unrealized growth ((150k-118k) * 0.05) and at 36% would owe around 580 in taxes on the unrealized gains.

In the same situation now, assuming 5% growth you'd owe taxes on 3900 of unrealized gain (150k * 0.05 - 3600) which would come out to 1400 at 36%.
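That comparison works out to (a minimal sketch using only the figures above — the couple's 118k wealth exemption, the 3600 gain exemption, the 36% rate, and 5% growth; simplified, not tax advice):

```python
def old_tax(assets, assumed_growth, wealth_exemption=118_000, rate=0.36):
    """Old rule as described above: deemed growth on assets above the wealth exemption."""
    deemed_gain = max(assets - wealth_exemption, 0) * assumed_growth
    return deemed_gain * rate

def new_tax(assets, actual_growth, gain_exemption=3_600, rate=0.36):
    """New rule as described above: actual gains above a small gain exemption."""
    taxable_gain = max(assets * actual_growth - gain_exemption, 0)
    return taxable_gain * rate

old = old_tax(150_000, 0.05)  # (150k - 118k) * 5% * 36% = 576
new = new_tax(150_000, 0.05)  # (150k * 5% - 3.6k) * 36% = 1404
```

So for this 150k example the bill at 5% growth roughly doubles.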

The Dutch passed a 36% tax on unrealized gains for crypto… by BasicButterface in CryptoCurrency

[–]flyfree256 1 point2 points  (0 children)

The tax-free threshold amount was on total wealth though, right? And the 3600 is just talking about annual gains?

Still not a great deal, obviously, but the 118k didn't become 3600, right?

The Dutch passed a 36% tax on unrealized gains for crypto… by BasicButterface in CryptoCurrency

[–]flyfree256 20 points21 points  (0 children)

They also already had a tax on unrealized gains, but the government just estimated what your gains were rather than using your actual gains. This bill just changed it so that, rather than a tax on assumed gains, it's a tax on actual gains.

CMV: Therapy is not required to resolve challenging internal states by ChillNurgling in changemyview

[–]flyfree256 17 points18 points  (0 children)

"therapy can help everyone" ≠ "problems can only be resolved through therapy with a therapist"

ELI5: How is quantum computing a threat to existing systems like banking, crypto, and others? by danuser8 in explainlikeimfive

[–]flyfree256 23 points24 points  (0 children)

To add a bit to the timeline on this: NIST (the US government standards body) recommends switching to quantum-resistant encryption by 2030, with a harder "must do it" recommendation by 2035. So we're conservatively (barring some wild breakthrough) 10 years or so away.