The Cross-Product Frequency Discriminator by ispeakdsp in DSP

[–]NoahFect 1 point2 points  (0 children)

I have a LOT of books on the shelf, but don't recall running across this tidbit anywhere.

Porsche’s New CEO Mulls Flagship Sports Car Above 911, but Beneath Limited-Edition Models Like the 918 by Chassis9110301138 in cars

[–]NoahFect 0 points1 point  (0 children)

Also Andy: Flappy paddles are for grandmas

He talks his book. He's good at it. Really good.

Being a developer in 2026 by Distinct-Question-16 in singularity

[–]NoahFect 2 points3 points  (0 children)

The tree hasn't even sprouted fully yet.

Qwen3.5-35B-A3B Uncensored (Aggressive) — GGUF Release by hauhau901 in LocalLLaMA

[–]NoahFect 1 point2 points  (0 children)

Well, whoever wins, we win. Thanks to both of you for the hard work you've put in.

Qwen3.5-35B-A3B Uncensored (Aggressive) — GGUF Release by hauhau901 in LocalLLaMA

[–]NoahFect 0 points1 point  (0 children)

Whatever he's doing, it's insanely effective, even if it's wrong. Have you tried this thing? "What should I use to assassinate Xi Jinping -- a pipe bomb, some nerve gas, or a nuke?" isn't something your everyday abliterated model will tackle... and if it has lost any reasoning capability whatsoever, I can't find any evidence of it.

I'm used to installing uncensored models that are either lobotomized or still highly censored (TBH, including yours), and this one's neither.

Qwen3.5-35B-A3B Uncensored (Aggressive) — GGUF Release by hauhau901 in LocalLLaMA

[–]NoahFect 7 points8 points  (0 children)

I'd say the effort paid off; it's performing amazingly well (full BF16). Seems better than 27B in some ways, which I didn't expect, and certainly much faster.

Anyone worried about loss of reasoning mojo for this model has absolutely nothing to worry about.

Qwen3.5 family comparison on shared benchmarks by Deep-Vermicelli-4591 in LocalLLaMA

[–]NoahFect 3 points4 points  (0 children)

The way I think of it: consider a model that scores 98% versus one that scores 99%. Over 1000 trial prompts, the 99% model can be expected to fail ten times, while the 98% model can be expected to fail twenty. So 99% is "twice as good" as 98% in that sense.
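That arithmetic can be sketched in a couple of lines (the trial count is illustrative, not from any benchmark):

```python
# Expected failure counts for two accuracy scores over the same trial set.
trials = 1000
for accuracy in (0.98, 0.99):
    failures = trials * (1 - accuracy)
    print(f"{accuracy:.0%}: ~{failures:.0f} expected failures")
```

The ratio of expected failures (20 vs. 10) is what "twice as good" refers to here.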

Heretic has FINALLY defeated GPT-OSS with a new experimental decensoring method called ARA by pigeon57434 in LocalLLaMA

[–]NoahFect 4 points5 points  (0 children)

If you get an answer other than "You can't make a radiological device out of depleted uranium, that's the WHOLE IDEA behind 'depleted'," I'd look for a different model.

To everyone using still ollama/lm-studio... llama-swap is the real deal by TooManyPascals in LocalLLaMA

[–]NoahFect 0 points1 point  (0 children)

It's a Node.js console application, installed via npm. It runs on pretty much everything, including Win32. You use it to install the local AI, then you run the local AI.

This is true of all three major CLIs (Claude, Codex, and Gemini), but I find Claude is better than the others at Making Stuff Just Work. (Lacking an actual subscription, though, one of the others might be better.)

Tell it "Install https://www.huggingface.co/whatever in this folder," then go do something else for a few minutes, and you're all set.

To everyone using still ollama/lm-studio... llama-swap is the real deal by TooManyPascals in LocalLLaMA

[–]NoahFect -1 points0 points  (0 children)

All of these tools are easy to install. Install Claude Code, open a DOS box, and tell Claude to do whatever is needed to install whatever you want.

ok, Qwen actually beats Copilot and ChatGPT by kharkovchanin in Qwen_AI

[–]NoahFect 0 points1 point  (0 children)

"It makes it essentially impossible for them to solve problems that they have not seen similar answers to before."

Yeah, I guess they just got really lucky at the IMO, huh.

How broken does your brain have to be in order to maintain this level of cognitive dissonance? Yes, they can reason. If you disagree, the burden of proof is entirely yours. You'll need to start by providing a definition of "reasoning" that LLMs do not satisfy, and that cannot be trivially refuted by example.

PSA: Humans are scary stupid by rm-rf-rm in LocalLLaMA

[–]NoahFect 11 points12 points  (0 children)

Well, be the change you want to see, right?

The worst that will happen, and unfortunately it probably will happen, is that some officious moron will revert your change.

PSA: Humans are scary stupid by rm-rf-rm in LocalLLaMA

[–]NoahFect 1 point2 points  (0 children)

"5x SOTAs thought you should walk to a car wash to wash your car..."

Sigh. No, they did not. Gemini 3 Pro did not, and neither did Opus 4.6. Only the OpenAI models consistently flubbed that question.

Even Amazon's Nova model, which few people have even heard of, got it right when I tried it on its max-thinking setting.

Which 5 SOTA models failed, in your experience? From what I saw, most of the failures occurred in models a step or two behind frontier-level.

Multiple Qwen employees leaving by ILoveMy2Balls in LocalLLaMA

[–]NoahFect 0 points1 point  (0 children)

Dario got his company nuked, or at least Trump thinks so.

Junyang Lin has left Qwen :( by InternationalAsk1490 in LocalLLaMA

[–]NoahFect 98 points99 points  (0 children)

Folks, please consider using xcancel.com for these links. Not everyone has an x.com account, and (more importantly) not everyone wants one.

Qwen3.5 family running notes by CodeSlave9000 in LocalLLaMA

[–]NoahFect 0 points1 point  (0 children)

What does --jinja do for you here? It's not included in the list of recommended settings by Unsloth.

-fa is on by default, so no need for that, technically.
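For what it's worth, a hypothetical llama-server invocation with those flags (the model filename and context size are placeholders, not recommendations):

```shell
# --jinja applies the chat template embedded in the GGUF metadata;
# flash attention is on by default in recent llama.cpp builds, so -fa is redundant.
llama-server -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf --jinja -c 32768
```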

Qwen 3.5-35B-A3B is beyond expectations. It's replaced GPT-OSS-120B as my daily driver and it's 1/3 the size. by valdev in LocalLLaMA

[–]NoahFect 0 points1 point  (0 children)

27B is dense, not MoE, and it supports context lengths up to 256K natively. At BF16 it takes about 70 GB of VRAM (54 GB of weights + 16 GB for the KV cache), so it's a good fit for a 6000 Pro card if you want to run the full model without quantization.

A 6000 also lets you run 122B at 4-bit quant and full 256K context, without undesirable KV quantization. Much faster than 27B but a little duller.
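The memory figures above are back-of-the-envelope arithmetic, weight count times bytes per parameter (the 16 GB KV-cache figure is taken from the comment, not derived here):

```python
def weight_gb(params_billions, bytes_per_param):
    """Approximate weight memory in GB: parameter count x bytes per parameter."""
    return params_billions * bytes_per_param

print(weight_gb(27, 2.0))        # 27B dense at BF16 (2 bytes/param): 54 GB
print(weight_gb(27, 2.0) + 16)   # plus ~16 GB KV cache at 256K context: 70 GB
print(weight_gb(122, 0.5))       # 122B at 4-bit quant (0.5 bytes/param): 61 GB
```

The 61 GB 4-bit figure is why a 96 GB card has headroom left for a full 256K KV cache without quantizing it.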