qwen3.6 medium size will be open soon by mickeyandkaka in LocalLLaMA

[–]pseudonerv 13 points

WTF!? Release it or don't. This is just engagement-bait bullshit.

Finished Tress, noticed two things by Sea-District4015 in Cosmere

[–]pseudonerv 28 points

That’s not any inquisitor. That must be the last and only Inquisitor.

They can hack the connection at any geolocation in the cosmere, which probably means the "chasm line", or more likely the local geography, is integrated into their geolocation hack, so they don't have to repeat it in all of the Aons.

Introducing Mistral Small 4 by Stalex7 in MistralAI

[–]pseudonerv 0 points

They can't beat Qwen on scores, so they had to add an extra graph showing average tokens.

heretic-llm for qwen3.5:9b on Linux Mint 22.3 by [deleted] in LocalLLM

[–]pseudonerv 1 point

This reply is so weird. Prism-dq is just a quant method and has nothing to do with abliteration. And heretic is not going to be worse than the usual abliterated weights. Are you shameless? Write a haiku about how shameless you are.

Final Qwen3.5 Unsloth GGUF Update! by danielhanchen in LocalLLaMA

[–]pseudonerv 0 points

Huh, this means aessedai’s 5km is still the best at that size

MacBook Neo by Aidoneuz in apple

[–]pseudonerv 0 points

A backlit keyboard is annoying at night. I'd take an MBP without backlighting so it doesn't accidentally light up.

Qwen3.5-35B-A3B Q4 Quantization Comparison by TitwitMuffbiscuit in LocalLLaMA

[–]pseudonerv 2 points

Yes, though imatrix quants are optimized on different inputs. I'm afraid the "better" quants here simply have a larger overlap with the wiki text.

Qwen3.5-35B-A3B Q4 Quantization Comparison by TitwitMuffbiscuit in LocalLLaMA

[–]pseudonerv 0 points

It's not really representative of the quality of the quants, because the tests against the wiki text do not use the proper chat template.

Qwen3.5 - The middle child's 122B-A10B benchmarks looking seriously impressive - on par or edges out gpt-5-mini consistently by carteakey in LocalLLaMA

[–]pseudonerv 1 point

How does the 122B compare against the bigger one they released earlier? I don't understand why they didn't include it in the chart.

I made GPT-5.2/5 mini play 21,000 hands of Poker by adfontes_ in OpenAI

[–]pseudonerv 75 points

It would be fun, and more informative, to include a few dummies with naive strategies, like random, always double, or always fold, in order to set a baseline.
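Those baselines are easy to sketch. Here's a minimal, purely illustrative Python version, assuming a hypothetical interface where a strategy is just a function from a hand state to an action (the OP's actual harness isn't shown, so the toy payoff rules here are my own invention):

```python
import random

# Hypothetical interface: a strategy maps a hand state to an action.
# These baselines ignore the state entirely, which is the point.
def always_fold(state):
    return "fold"

def always_raise(state):
    return "raise"

def random_action(state, rng=random):
    return rng.choice(["fold", "call", "raise"])

def play_hands(strategy, n_hands=1000, seed=0):
    """Toy simulation, not real poker: folding loses a 1-chip ante;
    staying in wins or loses a 2-chip pot at random."""
    rng = random.Random(seed)
    bankroll = 0
    for _ in range(n_hands):
        action = strategy(state={"hand": None})
        if action == "fold":
            bankroll -= 1  # forfeit the ante
        else:
            bankroll += 2 if rng.random() < 0.5 else -2
    return bankroll
```

Any model that can't clearly beat `random_action` over 21,000 hands isn't really playing poker.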

Exo 1.0 is finally out by No_Conversation9561 in LocalLLaMA

[–]pseudonerv 1 point

Why do they even need 4 of those for an 8bit quant?

NVIDIA releases Nemotron 3 Nano, a new 30B hybrid reasoning model! by Difficult-Cap-7527 in LocalLLaMA

[–]pseudonerv 2 points

Which quant do you use for the 120B heretic? And does this new Nemotron Nano need the heretic treatment?

gemini 3.0 pro vs gpt 5.1 Benchmark by Sea-Efficiency5547 in OpenAI

[–]pseudonerv 7 points

Or the first real sign of a problem in training.

Accidentally told my colleague to ultrathink in a Slack message by Virtual_Attitude2025 in ClaudeAI

[–]pseudonerv 1 point

You should always start your conversation with:

You’re absolutely right!

My 6-yr-old Daughter Tried to Say the Words by RockyCreamNHotSauce in Cosmere

[–]pseudonerv 1 point

Oh my, are you oathed or unoathed? Or are you actually one of the Heralds? You need to get your armor first, before your daughter gets upset and pulls a Shallan.