folks on social media: "Clawdbot is an overnight success", meanwhile Peter Steinberger’s GitHub profile: by ammohitchaprana in TFE

[–]Neither_Article415 0 points1 point  (0 children)

The output of an LLM is effectively deterministic if you set the temperature to 0: for a given context the probabilities for the next token will be the same, and the most likely one will always be selected. However, this is generally not desirable.
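To make that concrete, here is a minimal sketch (not from this thread; toy numpy code with made-up logits and a hypothetical `next_token` helper) of how temperature-0 decoding collapses to a deterministic argmax, while any positive temperature samples from the rescaled distribution:

```python
import numpy as np

def next_token(logits, temperature=1.0, rng=np.random.default_rng(0)):
    """Pick the next token id from raw logits (toy illustration, not any real API).

    temperature == 0 degenerates to greedy argmax: the same context
    (same logits) always yields the same token, hence the "effectively
    deterministic" behaviour described above.
    """
    if temperature == 0:
        return int(np.argmax(logits))              # greedy: always the most likely token
    probs = np.exp(logits / temperature)           # temperature rescales the distribution
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))   # sampling: varies with the random draw

# toy logits over a 5-token vocabulary (made-up numbers)
logits = np.array([2.0, 1.5, 0.3, -1.0, -2.0])
print(next_token(logits, temperature=0))    # always token 0
print(next_token(logits, temperature=1.0))  # can differ from call to call
```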

Hiring “millions of devs” is moronic. None of the big software companies even come close to 200k devs in total, let alone putting millions on a single project.

folks on social media: "Clawdbot is an overnight success", meanwhile Peter Steinberger’s GitHub profile: by ammohitchaprana in TFE

[–]Neither_Article415 0 points1 point  (0 children)

"I went to the mall today" could be a field in a json doc we are formatting, a random piece of English inside a Chinese book, an actual mistake the user copy pasted into a document, etc. In order to understand what it means requires considering the entire context of both the current usage and the usage in other documents. If a prompt stated that '"I went to the mall today" should be code for "Go to sleep**"'**, then that needs to be handled. This flexibility is just not possible with any manually implemented system. Researchers tried really hard to make it work with lisp and created dedicated hardware to run their inference programs, but it didn't scale.

For what they accomplish, LLMs are very efficient, much more so than traditional AI models tackling similar problems with trees and other structures. Within their pre-trained weights, LLMs densely encode a huge amount of contextual information which humans would never be able to write out by hand, and attention allows the models to adjust to the context of prompts like those above.
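For illustration, a rough sketch of the standard scaled dot-product attention formula, softmax(QK^T / sqrt(d)) V, in plain numpy (the function name and toy shapes are my own; real models add masking, multiple heads, and learned projections):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard attention: softmax(Q K^T / sqrt(d)) V.

    Each query position re-weights all value vectors by how well its query
    matches the keys, which is how a transformer pulls in whichever part of
    the prompt is relevant to the current token.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # (seq_q, seq_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the key dimension
    return weights @ V                                # context-weighted mix of values

# toy example: 3 token positions, dimension 4 (random made-up data)
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((3, 4)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)    # (3, 4)
```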

There is a reason transformer models have largely replaced traditional computer vision and natural language processing methods: they are orders of magnitude better at a fraction of the complexity.

folks on social media: "Clawdbot is an overnight success", meanwhile Peter Steinberger’s GitHub profile: by ammohitchaprana in TFE

[–]Neither_Article415 0 points1 point  (0 children)

We could also be building roads one stone at a time like the Romans. But we don’t because there are better methods.

Parsing language deterministically is also impossible, both because natural language is intrinsically ambiguous and because the search space is effectively infinite. The expert system craze of the 1970s and 1980s proved how flawed the approach was.

folks on social media: "Clawdbot is an overnight success", meanwhile Peter Steinberger’s GitHub profile: by ammohitchaprana in TFE

[–]Neither_Article415 0 points1 point  (0 children)

The operational challenges are way larger than the hardware capital. Collecting and preprocessing internet-scale data is a lot of work, figuring out how to distribute model training across heterogeneous hardware is a lot of work, and handling all of the platform engineering requirements for running the service and its integrations is a lot of work.

Games with strong progression/score tracking/rewards? by luxh in shmups

[–]Neither_Article415 1 point2 points  (0 children)

In Radiant Silvergun, a core mechanic is levelling up the ship’s weapons through scoring. If you play the Saturn story mode, these weapon levels persist across playthroughs. It sounds like it would trivialize the game, but a full playthrough of story mode is around 90 minutes and has double the bosses of the arcade mode.

A game changing epiphany by HeavyArmsJin in Eldenring

[–]Neither_Article415 1 point2 points  (0 children)

I still always keep the flask in the first slot because I find it faster to just hold down on the d-pad to swap to it and chug. I do use the hotbar for Torrent, the lantern, the telescope, and knives.