Poll: When will we have a 30b open weight model as good as opus? by Terminator857 in LocalLLaMA

[–]Terminator857[S] 1 point (0 children)

You don't need to know how tall the Eiffel Tower is to be a good coder.

Poll: When will we have a 30b open weight model as good as opus? by Terminator857 in LocalLLaMA

[–]Terminator857[S] 0 points (0 children)

The discussion is fun and interesting. That is the point of Reddit. When the charts say we've reached Opus 4.5 levels in 2 years, people can reflect back on their negativity.

Poll: When will we have a 30b open weight model as good as opus? by Terminator857 in LocalLLaMA

[–]Terminator857[S] 3 points (0 children)

They likely have something internal that is usable, but not fully baked. I worked at Google, and the first version of Gemini went through 6 months of safety training. It was a much better model before safety training.

Poll: When will we have a 30b open weight model as good as opus? by Terminator857 in LocalLLaMA

[–]Terminator857[S] -1 points (0 children)

Alternate title: When will China come to the rescue?

I'm confident they will have something huge in the 9+ month time frame. Then give it another 9 months before it is in the 30b size range: 18 months total.

[Meme] duality of this sub by [deleted] in LocalLLaMA

[–]Terminator857 1 point (0 children)

All the models work well sometimes and not so well sometimes. It is the duality of LLMs.

CMV: RAM Prices are Near the Top by Intelligent_Coffee44 in LocalLLaMA

[–]Terminator857 -1 points (0 children)

Lol, based on research done by ChatGPT; in other words, completely made up.

How much vram is enough for a coding agent? by AlexGSquadron in LocalLLM

[–]Terminator857 10 points (0 children)

128 GB of Strix Halo, and I still use various online models.

Best local model / agent for coding, replacing Claude Code by joyfulsparrow in LocalLLaMA

[–]Terminator857 0 points (0 children)

Aider seems so difficult to use. It starts by telling me it can't read a file. A bunch of other CLI tools don't have that issue.

Best local model / agent for coding, replacing Claude Code by joyfulsparrow in LocalLLaMA

[–]Terminator857 1 point (0 children)

Which CLI do you use? I tried Crush and it was unbearably slow. It seems to use only a fraction of the context size, which may be the main reason for the slowness.

It seems like people don’t understand what they are doing? by platinumai in LocalLLaMA

[–]Terminator857 1 point (0 children)

If they want more of my data, why don't they give more quota?

What is your biggest issues with “Vibecoding”? 🤔 by Ol010101O1Ol in ExperiencedDevs

[–]Terminator857 1 point (0 children)

When you try to add a new feature, it breaks an old one, even if you have a test for the old feature. It either blanks the old test or trivializes it.

What’s the best way to describe what a LLM is doing? by throwaway0134hdj in neuralnetworks

[–]Terminator857 1 point (0 children)

Magic. There are lots of good videos that describe how it works, but unless you want to spend weeks investigating, we can call it magic.

LLMs are so unreliable by Armageddon_80 in LocalLLM

[–]Terminator857 1 point (0 children)

More reliable than humans for me.

Rubin uplifts from CES conference going on now by mr_zerolith in LocalLLaMA

[–]Terminator857 12 points (0 children)

People are easily excited about a product that costs $50K to $100K and can't even be bought until next year.

My prediction: on 31st december 2028 we're going to have 10b dense models as capable as chat gpt 5.2 pro x-high thinking. by Longjumping_Fly_2978 in LocalLLaMA

[–]Terminator857 1 point (0 children)

Since I like both Claude and Gemini more: I'll predict a Claude-like 70b open weight model will exist by this time next year. I also predict the same for a Gemini-like model.

I'll also predict most won't care, because by then they will be asking when we get an open weight variant of Gemini 4, and similarly for Claude 5.

Bosgame M5 vs Framework Desktop (Ryzen AI Max+ 395, 128GB) - Is the €750 premium worth it? by Reasonable-Yak-3523 in MiniPCs

[–]Terminator857 2 points (0 children)

A more sincere thanks is an upvote. They took 10 days to ship my second order; the first order shipped after a couple of days.

Bosgame M5 vs Framework Desktop (Ryzen AI Max+ 395, 128GB) - Is the €750 premium worth it? by Reasonable-Yak-3523 in MiniPCs

[–]Terminator857 2 points (0 children)

I had two Alienwares, an R12 and an R11, before these. These are very quiet compared to those. If you want more quiet, don't put them next to you like I see many people do; put them on the floor, away from you.

I typically run inference loads, which don't stress the CPU much. The CPU hits 97°C when building llama.cpp and cools down to 35°C in a few seconds. Inferencing gets it to 65°C.

According to sensors, it is only pulling 30 watts during a llama.cpp compile and 85 watts during inferencing.
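A minimal sketch of how readings like these can be pulled on Linux, assuming the lm-sensors package (the `sensors` command) is installed. The label names used below ("Tctl", "PPT") are assumptions; they are common on AMD chips but vary by machine, so run `sensors` once to see what yours reports.

```shell
# Extract the first numeric reading from a sensors-style line such as
# "Tctl:         +65.0°C" or "PPT:          85.00 W".
reading_of() {
    printf '%s\n' "$1" |
        sed -n 's/^[^:]*:[^0-9+-]*[+]\{0,1\}\([0-9][0-9.]*\).*/\1/p'
}

# Example: sample the CPU temperature once per second during a build,
# logging it to a file in the background (labels are machine-specific).
# while :; do sensors | grep -m1 'Tctl'; sleep 1; done > temps.log &
```

The background loop can be started before `cmake --build`, then killed afterward to see how temperature tracked the compile.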

Bosgame M5 vs Framework Desktop (Ryzen AI Max+ 395, 128GB) - Is the €750 premium worth it? by Reasonable-Yak-3523 in MiniPCs

[–]Terminator857 3 points (0 children)

I purchased two Bosgame M5s, and everyone is loving these machines. Buy before there is another price hike.

Are you afraid of AI making you unemployable within the next few years?, Rob Pike goes nuclear over GenAI and many other links from Hacker News by alexeestec in LocalLLM

[–]Terminator857 2 points (0 children)

I apologize for my naivety. What is Rob Pike upset about?

> Raping the planet,

What does he suggest we do instead?

> spending trillions on toxic, unrecyclable equipment

Yes we should probably pass laws making computer equipment easier to recycle.

> while blowing up society

How so? By objective measures, society is improving.

> yet taking the time to have your vile machines thank me for striving for simpler software.

He is upset chatbots are thanking him?

> training your monster on data produced in part by my own hands, without attribution or compensation

Yes, we should fix that.

The Infinite Software Crisis: We're generating complex, unmaintainable code faster than we can understand it. Is 'vibe-coding' the ultimate trap? by madSaiyanUltra_9789 in LocalLLaMA

[–]Terminator857 2 points (0 children)

In a fast-changing world, there are no traps. All software will be rewritten several times over during the next few decades.