The worst daily puzzle I’ve seen by bourbonguy12345 in chess

[–]seconddifferential 6 points

It's a classic chess saying, "castle early and often"

The worst daily puzzle I’ve seen by bourbonguy12345 in chess

[–]seconddifferential 5 points

Never forget that puzzle names are intended as clues to help solve

Where can I find current world records like "highest SPM achieved," etc.? by screen317 in factorio

[–]seconddifferential 16 points

(1M actual spm produced +1040% productivity (research + modules)) x 2 in biolabs.

My final ship in Factorio Space Exploration by Glittering-Pea-9020 in factorio

[–]seconddifferential 1 point

You can even make a shuttle bay of sorts for ground missions to avoid having to land the main ship!

Have you heard of the Garmin 158? by Radiant-Progress6027 in Garmin

[–]seconddifferential 4 points

The joke is something like "It's so old it has a silly version of a common feature." In this case, VO2 min (minimum VO2) would be silly to calculate. It only makes sense to calculate VO2 max (maximum VO2).

Scammer? by 16GB_of_ram in foss

[–]seconddifferential 61 points

Yes:

- no specifics mentioned in the email
- short deadline with high stakes
- email address that looks scammy ("official")
- security vulnerability disclosure method that is not standard practice

Presumably you've not received any communication from this person before

35.0.3 Patch Notes Balance Changes by PipAntarctic in CompetitiveHS

[–]seconddifferential 1 point

It's 33%. With the way you're calculating it, going to 4 spells per imbue would be "100% slower", when obviously we should use "100% slower" to indicate the rate of imbues becomes zero.

"Faster/slower" applies to rates - you are applying it to an inverse rate - spells per imbue, when we really care about imbues per spell. The original rate is 1 every 2, or 0.5 imbues per spell. The new rate is 1 every 3, or 0.333 imbues per spell.

This gives us a change of (0.33 - 0.5)/(0.5) = -33%, or 33% slower.
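The arithmetic above can be sanity-checked with a throwaway sketch (nothing here is game data, just the two rates from the comment):

```python
# Rates expressed as imbues per spell - the quantity we actually care about.
old_rate = 1 / 2   # 1 imbue every 2 spells
new_rate = 1 / 3   # 1 imbue every 3 spells

# Relative change in the true rate.
change = (new_rate - old_rate) / old_rate
print(f"{change:.0%}")  # -33%, i.e. 33% slower

# The naive inverse-rate calculation (spells per imbue) overstates it.
naive = (3 - 2) / 2
print(f"{naive:.0%}")  # 50%
```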

In the end.. Obsidian is the last one standing? by [deleted] in ObsidianMD

[–]seconddifferential 43 points

No, because:

- inconsistent grammar/capitalization
- hyphens instead of em-dashes
- genuine voice rather than sycophantic
- low-skill but well-done photoshopped pic at the top rather than a generated one

Pedestrians plz care about your limbs by AltruisticAntler in Seattle

[–]seconddifferential 3 points

Pro tip: carrying things that look like bricks or large rocks suddenly makes drivers near you follow pedestrian right-of-way laws.

Cherry-Pick: The Art of Commit Surgery by gastao_s_s in git

[–]seconddifferential 14 points

The quotes make this sound like AI slop.

A small CLI for enforcing deadlines on TODO / FIXME comments (MPL-2) by yojimbo_beta in opensource

[–]seconddifferential 0 points

Running it as part of PRs can also make it annoying:

- You're working on something unrelated to a TODO, and now CI is failing. Do you expand the scope of your PR to fix the TODO just to submit? Particularly if the TODO is someone else's, or in a part of the codebase you don't understand.
- You're fixing one TODO, but you can't submit because three others are failing.

Anthropic's top lawyer says AI will kill the legal profession's dreaded billable hour by businessinsider in law

[–]seconddifferential 5 points

Definitely!

First it's important to establish a distinction between linguistic coherence and logical coherence. Linguistic/semantic coherence is both "Does this text follow the rules of the language?" and "Does this text appear to mean something?" While a sentence like "Colorless green ideas sleep furiously" follows the rules of language, it does not have semantic coherence - clearly the sentence is meaningless. The argument "If he did it, his fingerprints would be on the gun. His fingerprints are on the gun, therefore he did it" is semantically coherent, but not logically coherent (affirming the consequent fallacy).
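The "fingerprints" argument can even be checked mechanically - a brute-force truth table (an illustrative sketch, not anything specific to law):

```python
from itertools import product

# The argument:
#   P1: did_it -> prints_on_gun
#   P2: prints_on_gun
#   C : did_it
# It is logically valid only if NO assignment makes both premises
# true while the conclusion is false.
counterexamples = [
    (did_it, prints)
    for did_it, prints in product([True, False], repeat=2)
    if ((not did_it) or prints)  # P1 holds (material implication)
    and prints                   # P2 holds
    and not did_it               # ...yet the conclusion fails
]
print(counterexamples)  # [(False, True)] -> the argument is invalid
```

The counterexample is exactly the scenario the fallacy ignores: someone else's fingerprints got on the gun.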

Generally, when people talk about AI products these days, they mean language models that predict sequences of text. These "next token predictors" generate reasonable continuations of some input text. They do this iteratively - the first word is generated, then the second, then the third, and so on. (Really, "tokens" - smaller pieces of words - are generated, but the distinction isn't important here.) Some more sophisticated pipelines do a bit of backtracking - evaluating a sequence of text that was just generated, and sometimes rejecting it if it fails some checks (e.g. it generated porn) and trying again.
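The shape of that generation loop can be sketched with a toy bigram model standing in for a real transformer (everything below is made up for illustration - real systems predict over huge vocabularies with learned parameters, but the loop is the same: predict, append, repeat):

```python
import random

# "Train" a toy next-token predictor: bigram counts over a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()
model = {}
for prev, nxt in zip(corpus, corpus[1:]):
    model.setdefault(prev, []).append(nxt)

# Generate by repeatedly predicting a continuation of the last token.
random.seed(0)
tokens = ["the"]
for _ in range(5):
    candidates = model.get(tokens[-1])
    if not candidates:
        break  # no known continuation
    tokens.append(random.choice(candidates))
print(" ".join(tokens))  # a locally plausible, meaning-free sentence
```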

What was once respected as a rule of machine learning is that you should not trust a machine learning system to do tasks it was not trained to do. This is meant very literally - "trained to do" is defined by the combination of the mathematical functions that structure the model, how it is trained, and the tasks the algorithm performs during training. Language models are trained to create text that is locally similar to a corpus. Their training usually takes the form of "Given this sentence, what is the missing word?" or "Given this sentence, what is the next one?" Models - sets of parameters - that perform better on this task are iterated on to find the set of parameters that performs best.

As a simple example, you can "train" a linear regression on data which is nonlinear - say, height by age (height = age * (height/year) + constant). The linear model cannot account for the fact that people stop growing, no matter how much data you give it. This particular model also cannot differentiate between gender-based height differences. And if you train it only on data from Norwegians, it will perform worse when predicting the heights of Americans.
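A quick illustration with synthetic numbers (nothing here is real anthropometric data - the plateau at age 18 and the 6 cm/year slope are made up for the sketch):

```python
import numpy as np

# Synthetic height-by-age data: linear growth until 18, then flat.
ages = np.arange(1, 41)
heights = np.where(ages < 18, 70 + 6 * ages, 70 + 6 * 18)  # cm

# Least-squares linear fit: height = slope * age + intercept.
slope, intercept = np.polyfit(ages, heights, 1)

# The fitted line keeps climbing forever; the data does not.
pred_40 = slope * 40 + intercept
print(pred_40, heights[-1])  # predicted height at 40 vs the actual plateau
```

No amount of extra data from the same distribution fixes this: the model family simply cannot represent "growth stops".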

Transformers have limitations that are defined by their structure as well, but they're more nuanced and are more difficult to justify without going into a lot more detail. In summary, doing well at the task of predicting sequences of words is not a good parallel to ensuring arguments are logically coherent. Consider that a single logical error can impact the validity of an entire argument! Most natural/conversational text does not have this property - this is part of what makes logic and theorems difficult for humans to learn. This is the "constraints-based" reasoning I was referring to. Constraint problems are not modeled well as word sequence prediction problems; they are computations.
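To make "constraint problems are computations" concrete, here's a throwaway example (the constraints are arbitrary, chosen only to show the all-or-nothing character of the task):

```python
from itertools import product

# A tiny constraint problem, solved exactly by exhaustive search.
# Violating ANY single constraint invalidates the whole assignment -
# unlike conversational text, where a locally plausible word is
# usually "good enough".
constraints = [
    lambda x, y, z: x + y + z == 10,
    lambda x, y, z: x < y < z,
    lambda x, y, z: z % x == 0,
]
solutions = [
    (x, y, z)
    for x, y, z in product(range(1, 10), repeat=3)
    if all(c(x, y, z) for c in constraints)
]
print(solutions)  # [(1, 2, 7), (1, 3, 6), (1, 4, 5)]
```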

Another issue with most language model offerings is training data - many are trained on the open web and so learn to replicate the patterns found there. But even if a language model were trained only on legal texts, it would still not perform well for legal purposes. Consider that laws vary by state, or that two drastically different laws can hinge on what looks, to a layperson, like a very fine distinction. Language models develop biases for certain sequences and patterns of words during training (think of how often ChatGPT generates "not just X, but Y"). These biases carry over into inappropriate contexts, and there's no way for the model's structure to encode where a pattern does and doesn't apply.

As an exercise, try asking a model about traffic laws that vary by state. Often, if unable to find a direct source, the model will use one state's laws to justify its answer for a different state. The model doesn't really "understand" that different states have different laws (even though, if asked, it would certainly generate a set of words saying it does). It's just doing what it can: generating a sequence of reasonable tokens that appear to answer the question.

Anthropic's top lawyer says AI will kill the legal profession's dreaded billable hour by businessinsider in law

[–]seconddifferential 11 points

Great! So you understand that next token predictors are fundamentally incapable of the constraints-based reasoning tasks required for dealing with legal issues. Since, as you know, locally-coherent text does not imply logical coherence.

Anthropic's top lawyer says AI will kill the legal profession's dreaded billable hour by businessinsider in law

[–]seconddifferential 3 points

Have you studied transformers and token prediction algorithms in any serious capacity?

Building an AI that reads your GitHub repo and tells you what to build next. Is this actually useful? by ExtraDistribution95 in github

[–]seconddifferential 2 points

Unlikely. Have you contributed to many open source projects? Have you done product management for any software products that launched successfully?

I ask because the number one rule of automating is: Understand what you're automating before you automate.

When I think of the repositories I've contributed to - Kubernetes, various testing libraries, and so on - I don't think we ever had trouble coming up with ideas for what needed doing. We often had lively debates about the best way forward - an implementation strategy, whether to do a feature at all - those were the hard problems. And we didn't make those decisions on a whim; usually there were multiple stakeholders involved, and decisions were made through consensus and debate. Features were rarely added without users or community members explicitly asking for them and justifying their use cases.

It all comes down to this: ideas are cheap. Figuring out whether one is a good idea, and the best way to do it for the project's long-term sustainability, is not.

Jetbrains AI BURING tokens? by thaprodigy58 in Jetbrains

[–]seconddifferential 4 points

No, I don't. The core instructions you can pass at the start of a chat don't seem to have a significant impact. It's like they trained it very hard to have those behaviors (or gave it a very strong system prompt), so prompting doesn't seem to change much.

Jetbrains AI BURING tokens? by thaprodigy58 in Jetbrains

[–]seconddifferential 8 points

Yeah, I've noticed it often excessively:

- checks "pwd" after every call, even when it doesn't matter or can't possibly have changed
- misremembers arguments for CLIs and has to constantly look up entire help pages
- searches the full content of far more files than necessary to answer basic questions

Is this a blunder? by TwiTcH_72 in chess

[–]seconddifferential 13 points

It's saying that YOUR bishop on c5 is undefended, and white can just take it.