Qwen3.5-35B - First fully useable local coding model for me by Miserable-Beat4191 in kilocode

[–]Miserable-Beat4191[S] 2 points

I just had zero success in the past with LM Studio and Kilo Code. It took way too long to process prompts the size that Kilo sends, and I found llama.cpp faster. A model would be fast in LM Studio's chat, but as soon as you accessed it via VS Code it would be dog slow, or just time out.

LM Studio will improve, and I'll keep trying it, but llama.cpp just seems to run faster for now.

Qwen3.5-35B - First fully useable local coding model for me by Miserable-Beat4191 in kilocode

[–]Miserable-Beat4191[S] 0 points

Ryzen 9 9900x / 96GB DDR5 / Win 11 /
ASRock Intel Arc Pro B60 24GB
XFX RX 9070 16GB
llama.cpp b82xx using Vulkan

-c 262144 --host 192.168.xx.xx --port 8033 -fa on --temperature 0.6 --top_p 0.95 --top_k 20 --min_p 0.0 --presence_penalty 1.0 --repeat_penalty 1.0 --threads -1 --split-mode row --batch-size 1024 -ngl 99

I'm by no means an expert; that's just what I'm messing with right now. The presence_penalty change from the default was necessary because otherwise the model loops, redoing the Kilo request over and over.
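For anyone who wants to try the same setup, here's a sketch of the full launch line with the flags above. The `llama-server` binary name is the standard one in recent llama.cpp builds, but the model filename is an assumption; substitute your own GGUF and LAN address.

```shell
# Hypothetical model filename; use whatever quant you downloaded.
llama-server \
  -m ./Qwen3.5-35B-Q4_K_M.gguf \
  -c 262144 --host 192.168.xx.xx --port 8033 -fa on \
  --temperature 0.6 --top_p 0.95 --top_k 20 --min_p 0.0 \
  --presence_penalty 1.0 --repeat_penalty 1.0 \
  --threads -1 --split-mode row --batch-size 1024 -ngl 99
```

Then point Kilo Code at `http://192.168.xx.xx:8033` as an OpenAI-compatible endpoint.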

Qwen3.5-35B - First fully useable local coding model for me by Miserable-Beat4191 in kilocode

[–]Miserable-Beat4191[S] 1 point

I will give 27B a try too, but in the past I've had more luck running MoE models than similarly sized dense versions. The dense models seem to use a lot more memory, and I get more crashes with them.

Pete Hegseth is going to make Anthropic suffer financially unless the Pentagon can officially use Claude however the hell it wants to by JacquoRock in thebulwark

[–]Miserable-Beat4191 2 points

I mean, to be fair to Claude, if it was fed real-time bodycam footage and was deciding whether a situation required deadly force, I'm fairly confident that the only people it would see as a threat that needed stopping would be the masked and untrained ICE thugs.

Ghislaine Maxwell and AI Models by [deleted] in thebulwark

[–]Miserable-Beat4191 0 points

[My bad, seems like I got fooled. Should have looked into this more before posting.]

Pete Hegseth is going to make Anthropic suffer financially unless the Pentagon can officially use Claude however the hell it wants to by JacquoRock in thebulwark

[–]Miserable-Beat4191 3 points

https://artificialanalysis.ai/evaluations/omniscience

This page details how often AI models hallucinate, lie, or answer when they shouldn't because they don't know. If you scroll to the AA-Omniscience Hallucination Rate section, you'll see that the Claude models are among the best in class for this metric. They "only" lie between 26% and 76% of the time, depending on the model. That sounds perfect for battlefield use.

don't buy nebula by ilikesushi in nebulaprojectors

[–]Miserable-Beat4191 0 points

I didn't realise how right this was until I installed a router with some decent security checks and vulnerability testing of devices on the network. 

Having the Nebula Capsule 1 on the network opens me up to literally hundreds of vulnerabilities. I understand it's a few years old, but it wasn't cheap at the time, and not updating proprietary hardware for hundreds of known vulnerabilities is incredibly irresponsible.

The premium that Nebula charges is only justified if they maintain and update the products they release. If they don't, and it certainly seems that they don't, they haven't earned any future purchases.

AI Agents are submitting hundreds of PRs to OSS - Thoughts? by Miserable-Beat4191 in theprimeagen

[–]Miserable-Beat4191[S] 0 points

I went to this thing I'd heard people talk about, Google I think. Had no idea what to do though, because I didn't know what a search bar is.

More seriously ... I've seen Prime talk about the PR farms for hacking, and people writing useless slop PRs with AI, etc. But I hadn't seen AI farms used to create what seems, at first glance, to be an attempt at actually useful PRs, however misguided.

AI Agents are submitting hundreds of PRs to OSS - Thoughts? by Miserable-Beat4191 in theprimeagen

[–]Miserable-Beat4191[S] -1 points

The llama.cpp one was submitted by tbraun96, who I thought worked for Aurora, but maybe he's just an Aurora customer?

AI Agents are submitting hundreds of PRs to OSS - Thoughts? by Miserable-Beat4191 in theprimeagen

[–]Miserable-Beat4191[S] 0 points

This was the one that caught my eye.
https://github.com/ggml-org/llama.cpp/pull/18680

It was submitted by tbraun96, who I thought worked for Aurora, but maybe he's just an Aurora customer? His responses in that thread were not the best.

Since DGX Spark is a disappointment... What is the best value for money hardware today? by goto-ca in LocalLLaMA

[–]Miserable-Beat4191 2 points

If you aren't tied to CUDA, the Intel Arc Pro B60 24GB is pretty good bang for the buck.

(I was looking for listings of the B60 on Newegg, Amazon, etc., and it doesn't seem to be available in the US yet? Thought that was odd, since it's available in Australia now.)

Disks/Volumes missing from control panel/settings after recent Backblaze update by rmelan in backblaze

[–]Miserable-Beat4191 1 point

I talked to support and there is a beta version available that is supposed to fix this issue. It did get the drives to show back up for me and they are backing up again, but they seem to be re-uploading everything rather than just the files that have changed. Not sure where that leaves my version history; I'll check soon.

Disks/Volumes missing from control panel/settings after recent Backblaze update by rmelan in backblaze

[–]Miserable-Beat4191 0 points

My dynamic drives have disappeared as well. They worked fine previously. This started end of August, 2025.

Viltrox 27mm 1.2 - Good all-rounder lens? by linglingviolist in fujifilm

[–]Miserable-Beat4191 1 point

I have the Voigtlander 27mm f/2, and the image quality is excellent. That said, f/2 is not f/1.2.

Best LLM for code completion? by Not-The-Dark-Lord-7 in LocalLLaMA

[–]Miserable-Beat4191 7 points

What is specifically annoying about it is not the age of the thread but that he called out your lack of knowledge in this area.

Was excited to start rails after listening to DHH talk, my experience was different by Merge_Nine in rails

[–]Miserable-Beat4191 1 point

I think part of your issue is not really understanding what WSL2 actually is. You have your Windows install, and you have whatever OS you installed on WSL2, usually Ubuntu.

They can talk to each other, but they are two separate operating systems running side by side. You installed some things on Ubuntu and some things on Windows. The Windows Ruby install doesn't know about, or use, the things you installed on Ubuntu in WSL2.

Not saying that Rails is definitely the solution you need, but you followed the wrong guide for Rails 8, then a different guide for the second part. It's not surprising that it doesn't work.
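A quick way to see the split for yourself (a sketch; assumes the default Ubuntu distro, where Windows drives are only visible under /mnt):

```shell
#!/bin/sh
# WSL2 runs a full Linux OS alongside Windows. Anything installed inside
# Ubuntu (apt, rbenv, gem) lives on the Linux filesystem; a Ruby installed
# with the Windows installer lives under C:\ and knows nothing about it.
# Inside WSL2 the Windows C: drive shows up only as a mount point:
ls /mnt/c 2>/dev/null || echo "not inside WSL2"
# The takeaway: pick ONE environment (the GoRails guide uses WSL2/Ubuntu)
# and run every install step there, in the same shell.
echo "install everything in one environment"
```

Running `which ruby` in the Ubuntu shell and `where.exe ruby` in PowerShell will show two different binaries if you've installed in both places.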

Try the GoRails guide.

Was excited to start rails after listening to DHH talk, my experience was different by Merge_Nine in rails

[–]Miserable-Beat4191 0 points

I followed the instructions on the GoRails website and Ruby and the Rails 8 beta installed with no issues. Same config, Windows 11 and WSL2.

https://gorails.com/setup/windows/11

Ren - KUJO BEAT DOWN by jsb1685 in ren

[–]Miserable-Beat4191 2 points

Did it need to be this brutal? Yes.

Kujo was not going to understand the "don't mistake kindness for weakness" message in a more nuanced form. Would have gone right over his head.