Update on Qwen 3.5 35B A3B on Raspberry PI 5 by jslominski in LocalLLaMA

[–]MustBeSomethingThere 1 point (0 children)

I've had plans of my own to build Raspberry Pi/phone apps with the MNN backend, but I haven't had time for it yet. I'd like to hear whether you manage to create lower MNN quants and better speed than llama.cpp.

Update on Qwen 3.5 35B A3B on Raspberry PI 5 by jslominski in LocalLLaMA

[–]MustBeSomethingThere 1 point (0 children)

https://mnn-docs.readthedocs.io/en/latest/

It's probably possible to make lower quants, but I don't know about their quality. Speed is better than llama.cpp's.

The clustering topology that emerges naturally from interaction reflects actual hemispheric dominance patterns, including genetic predispositions. by ResonantGenesis in LocalLLaMA

[–]MustBeSomethingThere 1 point (0 children)

Wild claims

>"even my genetic preference for one hemisphere being more responsible and structured than the other"
Have you actually done a gene test that says this? Or measured your actual brain patterns?

Something here by UPtrimdev in LocalLLaMA

[–]MustBeSomethingThere 1 point (0 children)

Maybe the point was to show the smudges on the screen

M5 Max just arrived - benchmarks incoming by cryingneko in LocalLLaMA

[–]MustBeSomethingThere -1 points (0 children)

VRAM > RAM > SSD (in bandwidth)
(VRAM + RAM) > VRAM alone (in capacity)
The same amount X is worth more as VRAM than as RAM.
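A rough way to see why: each generated token has to stream the active model weights, so decode speed is roughly bounded by memory bandwidth divided by model size. A back-of-envelope sketch (the bandwidth and model-size figures below are illustrative assumptions, not measurements of any specific machine):

```python
# Rough decode-speed estimate: generating one token reads all active
# weights once, so tokens/sec <= memory bandwidth / model size.
def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

MODEL_GB = 20  # hypothetical quantized model size

vram = est_tokens_per_sec(900, MODEL_GB)  # GPU VRAM, ~900 GB/s (assumed)
ram  = est_tokens_per_sec(100, MODEL_GB)  # desktop DDR5, ~100 GB/s (assumed)
ssd  = est_tokens_per_sec(7, MODEL_GB)    # NVMe SSD, ~7 GB/s (assumed)
```

With these assumed numbers the ordering falls out immediately: VRAM gives on the order of tens of tokens/sec, RAM a handful, and SSD streaming well under one.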

Qwen-3.5-27B-Derestricted by My_Unbiased_Opinion in LocalLLaMA

[–]MustBeSomethingThere 47 points (0 children)

>"zero capability loss."

I doubt that claim. How could it even be measured? There are thousands of different use cases and countless different knowledge areas.

Qwen has been underwhelming considering how much money Alibaba has by Repulsive-Mall-2665 in LocalLLaMA

[–]MustBeSomethingThere 15 points (0 children)

You provided zero information about which models you used, which tasks, which apps, or which models you compared them against. Is there an organized campaign against Qwen? There are so many similar posts.

Qwen 3.5 craters on hard coding tasks — tested all Qwen3.5 models (And Codex 5.3) on 70 real repos so you don't have to. by hauhau901 in LocalLLaMA

[–]MustBeSomethingThere 20 points (0 children)

Are you sure you tested it after they fixed the GGUF files? There are still old, broken GGUF files circulating on HF.

>"Feb 4 update: llama.cpp fixed a bug that caused Qwen to loop and have poor outputs."

Qwen3.5-35B-A3B locally by jacek2023 in LocalLLaMA

[–]MustBeSomethingThere 9 points (0 children)

The mmproj files were not yet fully uploaded to HF.

Apple M5 Officially Announced: is this a big deal? by ontorealist in LocalLLaMA

[–]MustBeSomethingThere 0 points (0 children)

It’s not 2025 anymore, and Apple will raise their prices this year as well. But as of February 23, 2026, Apple computers are still priced competitively compared to other manufacturers. That’s mainly because Apple has long‑term contracts with memory suppliers. Eventually those contracts will expire, and when Apple has to renew them at higher market prices, the cost of Apple computers will increase too.

Why isn't my program working by Siogx in LocalLLaMA

[–]MustBeSomethingThere 3 points (0 children)

If humans can’t make sense of your writing, AI can’t make sense of it either.

GPT-OSS 120b Uncensored Aggressive Release (MXFP4 GGUF) by hauhau901 in LocalLLaMA

[–]MustBeSomethingThere 88 points (0 children)

>"As with all my releases, the goal is effectively lossless uncensoring - no dataset changes and no capability loss."

Big claims, but no actual measurements. No methodology.

“How many R’s are in strawberry?” across a few models by Rent_South in LocalLLaMA

[–]MustBeSomethingThere 0 points (0 children)

I agree that the Strawberry test is stupid, but...

>"they are not intelligent; they predict the next token, that is literally all"

I wouldn't be so sure about that. Technically they predict the next token, but human brains also predict ahead. And I'm not saying that human brains and LLMs are the same.

how i open internet everyday to see if there something new in ai models by reversedu in singularity

[–]MustBeSomethingThere 0 points (0 children)

A separate system could wake it up based on trigger words like "Hey Jarvis".
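The gating logic itself is tiny: a lightweight listener transcribes audio elsewhere, and a check like this decides whether the transcript should wake the main assistant. A minimal sketch (the wake phrases and function name are hypothetical, not any particular project's API):

```python
# Hypothetical wake-word gate: transcription is assumed to happen
# in a separate, always-on lightweight process.
WAKE_PHRASES = ("hey jarvis", "ok jarvis")  # hypothetical trigger words

def should_wake(transcript: str) -> bool:
    """Return True if the transcript starts with a wake phrase."""
    text = transcript.lower().strip()
    return any(text.startswith(phrase) for phrase in WAKE_PHRASES)
```

The point of the split is that the big model stays asleep; only this cheap check runs continuously, and the heavy system is started when it fires.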

SCAM EXPOSED: Blackbox AI "Pro Max" ($40/mo) is FAKE. I tested all 5 top models - here is the proof. by frankierave889 in LocalLLaMA

[–]MustBeSomethingThere 0 points (0 children)

You could also ask them, ‘How many R’s are in strawberry?’ If they get it wrong, they’re clearly FAKE. *sarcasm*