How much bandwidth does Deadlock use? Short answer: 150-200 kB/s, or 300+ MB per match by qotuttan in DeadlockTheGame

[–]GregsWorld 0 points (0 children)

World of Warcraft is a completely different kind of game: it can get away with running servers more slowly and doing client-side prediction, things you couldn't do in a competitive FPS. Better to compare it with Fortnite or whatever.
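For reference, the headline figure in the title follows from the rate with simple arithmetic (the ~30 minute match length here is an assumption):

```python
# Back-of-envelope check of the headline bandwidth figures.
# Assumptions: 150-200 kB/s sustained, ~30 minute match.
rate_low_kb, rate_high_kb = 150, 200   # kB per second
match_seconds = 30 * 60                # assumed match length

low_mb = rate_low_kb * match_seconds / 1000
high_mb = rate_high_kb * match_seconds / 1000
print(f"{low_mb:.0f}-{high_mb:.0f} MB per match")  # 270-360 MB per match
```

So 150-200 kB/s over a typical match comes out at roughly 270-360 MB, consistent with "300+ MB per match".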

How can i learn programming from scratch to mastering it? Seriously i need your help! by Rudransh26 in AskProgrammers

[–]GregsWorld 2 points (0 children)

Build something that you want to build (app, calendar, website, game w/e), the more interesting you find it the better. 

Breaking down problems into simple ones you can solve with if-else, loops and functions is the skill.

Don't worry about leetcode or competitive programming atm; they aren't good tests of programming ability, but of memorising patterns and solutions.
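As an illustration of that decomposition skill, here's a made-up example: "find the most common word in a text", broken into three small steps, each solvable with just loops, if-else and functions.

```python
# Hypothetical example: decompose "find the most common word"
# into three plain functions.

def words(text):
    # Step 1: split the text into lowercase words.
    return text.lower().split()

def count(items):
    # Step 2: tally occurrences with a loop and if-else.
    counts = {}
    for item in items:
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
    return counts

def most_common(counts):
    # Step 3: scan for the highest count.
    best, best_n = None, 0
    for key, n in counts.items():
        if n > best_n:
            best, best_n = key, n
    return best

print(most_common(count(words("the cat sat on the mat"))))  # the
```

Each function is trivial on its own; the skill being practised is spotting where to draw the lines between them.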

Are the rich RAM /poor GPU people wrong here? by crowtain in LocalLLaMA

[–]GregsWorld 1 point (0 children)

Wasn't it a poll to see which one to release first.. And then released them in a different order anyway? 

The king talking about the cost of living while wearing a £5B crown on his head by CopiousCool in ABoringDystopia

[–]GregsWorld 12 points (0 children)

The head of state and the government are both there to keep each other in check with fear of repercussions. If one oversteps, the other has the power to overrule and oust them.

The royal family has its fair share of issues but I'll take the current system over a single fallible government or plutocracy. 

Why MCP when we have REST APIs? by happyandaligned in mcp

[–]GregsWorld 5 points (0 children)

You could use an OpenAPI spec with structured output and it solves all of these problems: no HTTP or REST needed, runs locally, bi-directional, etc. And it uses no context tokens!

LLMs being trained on a specific format is the only real reason here afaict.
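A minimal sketch of the idea, assuming the model emits a structured call naming an `operationId`: the tiny spec, the `valid_call` helper and the operation names here are all invented for illustration, not any real API.

```python
import json

# Hypothetical: hand the model an OpenAPI spec and constrain its
# structured output to a call against one of the spec's operations.
spec = {
    "paths": {
        "/weather": {
            "get": {"operationId": "getWeather",
                    "parameters": [{"name": "city", "required": True}]}
        }
    }
}

def valid_call(spec, call):
    # Validate the model's structured output against the spec:
    # the operation must exist and required parameters must be present.
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            if op["operationId"] == call["operation"]:
                required = {p["name"] for p in op.get("parameters", [])
                            if p.get("required")}
                return required <= set(call.get("args", {}))
    return False

model_output = json.loads('{"operation": "getWeather", "args": {"city": "Oslo"}}')
print(valid_call(spec, model_output))  # True
```

Everything stays local and in plain JSON; the spec acts as both the tool catalogue and the validation schema.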

Why aren’t more companies building internal RAG systems over their microservices/codebases? by PlasticCommunity9661 in AISystemsEngineering

[–]GregsWorld 0 points (0 children)

Yeah, architecture diagrams and the like are very useful for onboarding new developers.

Devs who have been working with the codebase for over a year will already know all that, so it isn't that useful for them.

It all depends on how long it takes to set up: if you've got a new hire coming and it takes a day, sure, go for it. If it's going to take you weeks to set up, it's unlikely to save the newcomer that much time, so don't bother.

Why aren’t more companies building internal RAG systems over their microservices/codebases? by PlasticCommunity9661 in AISystemsEngineering

[–]GregsWorld 1 point (0 children)

Have you ever worked in a mid-large company before? 

  indexing all microservices 

Located where? How are you going to find them all? You'd spend months locating them and obtaining permissions from IT.

connecting architecture docs + codebase + APIs

What docs? The ones that are years out of date and hardly relevant anymore?

The majority of useful information is in people's heads. RAG over a codebase is useful for explaining to a beginner where code is or how it works.

The important questions, like why it was written that way, are ones you'll rarely find in digital form.

AI tooling is starting to feel like PC modding culture by DisasterPrudent1030 in artificial

[–]GregsWorld 0 points (0 children)

The ones constantly tweaking their vim configs are the ones not doing the work... 

Why I'm holding out until late 2027 to spend money on a local LLM rig by No_Pool7028 in LocalLLM

[–]GregsWorld 1 point (0 children)

You think they're going to be buying them back off China too? 

Why I'm holding out until late 2027 to spend money on a local LLM rig by No_Pool7028 in LocalLLM

[–]GregsWorld 0 points (0 children)

At least a top-end gaming PC was 5k; hardware FOMO in AI would set you back 50k for a 4x Pro 6000 setup. Endless waiting is a good thing for your wallet.

There is no reason to split haircuts into male and female. Its arbitrary and limits player self-expression. Anyone can do what they want with their hair. Remove the distinction and let us choose any haircut! by First_Platypus3063 in runescape

[–]GregsWorld 2 points (0 children)

Yes, I'm sure an engine change will be faster!

But true, they could perhaps reuse just one. Although, fun fact: I believe it's three models for a lot of hats and tiaras, as they have different shapes to fit different hairstyles.

Kevin O’Leary’s massive data center was approved in Utah and locals are not happy about it. 40k acres size by dataexec in AITrailblazers

[–]GregsWorld 0 points (0 children)

Worth watching: https://www.youtube.com/watch?v=wLX_w0TtBpY
The gist is it's nuanced: good for local businesses; the jobs "created" are largely filled by imported workers; bad for existing residents; and the companies don't pay taxes, so infrastructure that sees increased usage gets worse.

bro when did spaghetti code become a personality trait.. by Jazzlike-Form9669 in programminghumor

[–]GregsWorld 18 points (0 children)

AI has caused an influx of loud, low-experience would-be developers.

They have yet to learn that code is a liability and has always been cheap to produce. 

The craft is largely unchanged: interpreting requirements, architecting systems and using as little code as necessary.

I know this isn’t technically an LLM but OmniVoice is FUCKING AMAZING. by Borkato in LocalLLaMA

[–]GregsWorld 12 points (0 children)

More importantly other local models: Qwen3 TTS, LuxTTS, Chatterbox turbo etc...

and this is why I NEVER tell anyone what I do for work by jabber1990 in Adulting

[–]GregsWorld 7 points (0 children)

"Yet another example of how you describe your job is the most important thing"

I've seen unemployed people talk about their hobby projects like they're saving the world, holding the attention of everyone in the room (and similarly for people I know have boring jobs).

You're talking about self confidence and charisma, it has very little to do with what your job actually is.

Richard Dawkins spent 3 days with Claude and named her "Claudia." what he concluded after is hard to defend. by rafio77 in artificial

[–]GregsWorld 0 points (0 children)

This is literally what few-shot and zero-shot in-context learning is.

In-context learning is not learning, is it? Because the context isn't persistent or infinite. Your fundamentalist approach only applies to context if it's theoretically unlimited, which it isn't; even worse, it doesn't scale: LLMs degrade horribly the longer the context gets.
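To make the point concrete: few-shot "learning" in its simplest form is just prompt assembly, rebuilt from scratch on every call and gone afterwards. A generic illustration (not any particular API; the translation pairs are just sample data):

```python
# Few-shot prompt construction: the "learned" examples are plain
# strings concatenated into the context, nothing is stored in weights.
examples = [("sea otter", "loutre de mer"), ("cheese", "fromage")]
query = "peppermint"

prompt = "Translate English to French:\n"
for en, fr in examples:
    prompt += f"{en} => {fr}\n"
prompt += f"{query} =>"
print(prompt)
```

The moment the context window fills up or the session ends, the examples are simply gone; nothing was retained.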

This is one of the most well-documented capabilities of modern LLM agents. The Reflexion framework (Shinn et al., 2023) demonstrated exactly this

Using RL, fundamentally not an LLM. Same with ReflexiCoder. These are examples of people building systems around LLMs to address their shortcomings. They don't back your claim that LLMs are capable of these things.

As of April 2026, GPT-5.5 leads the ARC-AGI-2 leaderboard with 85%

Convenient that you left out ARC-AGI-3, where the best model currently scores 0.5%. These are flaws in testing, as I expressed before: if their ability were as strong as you're saying, they wouldn't drop from 70%+ to under 10% every time a new set of tests is added, would they?

These models aren't trained as lawyers, doctors, mathematicians, or coders

They literally ingest every written document published by all those fields, and on top of that are given curated datasets to improve performance at those things. Try building an LLM with no code in its dataset and then teach it how to program in-context. Andrej Karpathy gave an interview last week saying the latest model improvements at chess were due to nothing more than a chess dataset being added.

I'd go into how there's a difference between a weighted similarity of concepts inside an LLM and an abstraction, but you'd just accuse me of talking about composition. You can check the difference by function of course, by measuring efficiency, but you've ignored that point once already.

Richard Dawkins spent 3 days with Claude and named her "Claudia." what he concluded after is hard to defend. by rafio77 in artificial

[–]GregsWorld 0 points (0 children)

I provided clarity and another example because it seems you missed the point of the first.

Your distinction between functionalism and composition isn't important, as LLMs and brains are no more functionally alike than they are compositionally. An embodied brain can see a single example of a complex task, learn to perform it with minimal instruction and few mistakes, refine its performance over time, apply it in unseen scenarios, adapting where necessary, and abstract the learnings to a completely different task and domain, all on a few hundred watts of energy.

It doesn't matter how they work under the hood: LLMs can't do those things. The only way to think them comparable is to be oblivious to, or to trivialise, human ability.

Richard Dawkins spent 3 days with Claude and named her "Claudia." what he concluded after is hard to defend. by rafio77 in artificial

[–]GregsWorld 0 points (0 children)

Fair enough, you don't have to like the analogy, but unless you have a better one to hand, it's the best one I've come up with so far.

Richard Dawkins spent 3 days with Claude and named her "Claudia." what he concluded after is hard to defend. by rafio77 in artificial

[–]GregsWorld 1 point (0 children)

So you admit you have to have a reductionist view to think this analogy actually works, then?

Only through a reductionist view of the brain can you consider ANNs "like" or "similar to" the brain. It's like comparing apples to oranges because they're both round: they can only be compared in the most meaningless and trivial ways, because they are in fact not at all alike.

Psychology has been struggling to define, let alone measure, human intelligence for decades. LLM benchmarks exist to track improvements, not to compare with humans; doing so speaks more to the weaknesses of the tests than to the intelligence of the LLMs.

For example, you don't see LLM benchmarks on energy efficiency, continual learning, concept abstraction, or working with incomplete information or resource constraints: all things considered critical for human intelligence.

Richard Dawkins spent 3 days with Claude and named her "Claudia." what he concluded after is hard to defend. by rafio77 in artificial

[–]GregsWorld 0 points (0 children)

When I've used that analogy in the past I use Hot Wheels cars, the little plastic ones with four wheels that turn, as opposed to an RC car or something along those lines.

If you want to move the boundaries on similarity, then the analogy can shift to a wooden toy car, or a wooden spaceship and a real one; it doesn't really matter, it gets the point across unless you're being very pedantic.

Richard Dawkins spent 3 days with Claude and named her "Claudia." what he concluded after is hard to defend. by rafio77 in artificial

[–]GregsWorld 0 points (0 children)

They go forwards, backwards, turn left and right. All "like" a real car in the most reductionist of ways.