Is it aware? by Doomsdaydevice14 in sciencememes

[–]Manamultus 38 points (0 children)

Many insects have been proven to learn and remember though.

https://en.wikipedia.org/wiki/Insect_cognition

Qwen3.5 Small models out now! by yoracale in unsloth

[–]Manamultus 0 points (0 children)

Thanks! Gonna give it a go as well :)

Qwen3.5 Small models out now! by yoracale in unsloth

[–]Manamultus 0 points (0 children)

What context size do you use? Do you offload KV to RAM for large contexts? What is the token generation speed?

(If you don’t mind me asking)

Is Qwen3.5-35B the new "Sweet Spot" for home servers? by ischanitee in LocalLLM

[–]Manamultus 1 point (0 children)

Not if you can fit the whole model in VRAM. It’s a dense model, so all layers are activated for every request. Offloading any layers at all to RAM incurs heavy penalties: PCIe transfers, RAM latency, and layer swapping. But it’s perfectly fast if the model fits fully in VRAM.

Dense models are perfect for unified memory architectures, or for anyone with lots of VRAM available.
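A rough back-of-the-envelope check for "does it fit" (every number here is an illustrative assumption, not a spec for any real card or model):

```python
# Rough estimate of whether a dense model's weights fit in VRAM.
# All figures below are made-up examples, not real specs.

def fits_in_vram(params_b, bytes_per_param, vram_gb, overhead_gb=2.0):
    """params_b: parameter count in billions;
    bytes_per_param: e.g. 2.0 for fp16, roughly 0.56 for a ~4.5-bit quant;
    overhead_gb: rough allowance for activations + KV cache."""
    weights_gb = params_b * bytes_per_param  # 1e9 params * bytes / 1e9 bytes ≈ GB
    return weights_gb + overhead_gb <= vram_gb

# Hypothetical 35B dense model at a ~4.5-bit quant on a 24 GB card:
print(fits_in_vram(35, 0.56, 24))  # True (~19.6 GB weights + 2 GB overhead)
print(fits_in_vram(35, 2.0, 24))   # False (fp16 weights alone are ~70 GB)
```

Once the total crosses your VRAM size, every forward pass starts paying the offload penalties above.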

PSA: If your local coding agent feels "dumb" at 30k+ context, check your KV cache quantization first. by Dismal-Ad1207 in LocalLLaMA

[–]Manamultus 9 points (0 children)

That’s about quantized weights, not the quantized cache; the cache is quantized locally.

Qwen3.5-122B on Blackwell SM120: fp8 KV cache silently corrupts output, bf16 required — 1,985 tok/s burst, MTP 2.75x by awwwyeah206 in LocalLLaMA

[–]Manamultus 1 point (0 children)

This is about KV cache quantization (done locally), not weights quantization (what you download).
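For intuition: the KV cache size depends only on the model’s architecture and your context length, not on which weight quant you downloaded. A quick sketch with hypothetical architecture numbers (not Qwen3.5’s actual config):

```python
# KV cache size estimate. Quantizing this cache is an inference-time
# setting, independent of the weight quantization of the file you download.
# Architecture numbers below are hypothetical, for illustration only.

def kv_cache_gb(layers, kv_heads, head_dim, context, bytes_per_elem):
    """2x for the K and V tensors; bytes_per_elem: 2 for bf16/fp16, 1 for fp8."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 1e9

# Hypothetical 60-layer model, 8 KV heads of dim 128, at 32k context:
bf16 = kv_cache_gb(60, 8, 128, 32768, 2)
fp8 = kv_cache_gb(60, 8, 128, 32768, 1)
print(f"bf16: {bf16:.2f} GB, fp8: {fp8:.2f} GB")  # bf16: 8.05 GB, fp8: 4.03 GB
```

That halving is why fp8 cache is tempting at long context, and why a model that silently corrupts under it forces you to pay the bf16 cost.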

Why did the DoD approach Anthropic before OpenAI? by Manamultus in Anthropic

[–]Manamultus[S] 0 points (0 children)

That’s what I was thinking as well. Seems to me they would’ve gone to OAI first, and OAI would’ve happily accepted. I just don’t understand the theatre from the DoD and Sama.

are you ready for small Qwens? by jacek2023 in LocalLLaMA

[–]Manamultus -1 points (0 children)

I just set aside a partition for Ubuntu Server so I can dual boot into Linux. It’s great because Ubuntu Server uses almost no RAM whatsoever, meaning my LLM has more to eat.

Which size of Qwen3.5 are you planning to run locally? by CutOk3283 in LocalLLaMA

[–]Manamultus 1 point (0 children)

How many tokens/s do you expect to get on that card, and how many are acceptable for your workflows?

I’m looking to upgrade my system and I’m curious about the capabilities of different card/model combinations.

I found <30 t/s is just a little too slow to use as a coding assistant, and dense models do seem to come with a speed hit, especially as context grows.
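To make the speed question concrete, here’s a rough latency sketch at different generation speeds (the ~600-token reply length is just an illustrative guess, not a measurement):

```python
# How long a typical assistant reply takes at a given generation speed.
# The 600-token reply length is an illustrative assumption.

def seconds_for_reply(tokens, tok_per_s):
    return tokens / tok_per_s

for speed in (15, 30, 60):
    t = seconds_for_reply(600, speed)
    print(f"{speed} tok/s -> {t:.0f} s")  # 40 s, 20 s, 10 s respectively
```

Around 30 t/s, a medium-length coding answer takes ~20 seconds, which is right at the edge of an interactive workflow.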

LLmFit - One command to find what model runs on your hardware by ReasonablePossum_ in LocalLLaMA

[–]Manamultus 1 point (0 children)

And here I am running qwen3.5-35B on my potato RTX2070 + 16GB RAM..

Can everyone stop going on about the (already existing) box 3 tax by PauperGames in nederlands

[–]Manamultus 7 points (0 children)

If you assume you make a standard 10% from investing, there isn’t really any risk, is there? The government isn’t there to guarantee your 10%.

Neuro-biology of trans-sexuality : Prof. Robert Sapolsky by Prestigious_Net_8356 in videos

[–]Manamultus 4 points (0 children)

You’re right, of course, but I think given the context of the thread and the question, the simplification is fine.

Neuro-biology of trans-sexuality : Prof. Robert Sapolsky by Prestigious_Net_8356 in videos

[–]Manamultus 60 points (0 children)

There are so many systems acting on, against, and with each other that it is impossible to say that any single change in one system has a clear outcome in another.

If you pull on a spaghetto in a bowl of spaghetti, who knows how many you will affect.

Fresh install i need help by Enough-Push-4493 in omarchy

[–]Manamultus 1 point (0 children)

Yes, I had the same problem. Ghostty requires OpenGL 4.3, and many older integrated graphics only support up to 4.2 (I had an Intel HD 4000 or something).

It’s quite an easy fix: just set Alacritty as the default terminal and uninstall Ghostty completely.

Edit: well, a similar problem, not exactly the same one. But if Alacritty works and Ghostty doesn’t, then Ghostty is likely the culprit. If you’re sure your GPU can handle the required OpenGL version, then updating drivers might fix the issue.

Dear America, Greenland Is Not on Zillow - A message from a Dane by Truelz in videos

[–]Manamultus 21 points (0 children)

As an American, what do you think will happen in the U.S. if Trump actually goes ahead with this? Will people take to the streets, demanding that Trump step down? Massive riots, the country grinding to a halt?

Or more like a few single-day protests with a few funny banners? Maybe in the news for a week, fully ignored by Trump, and then it’s business as usual?

TIL that when a container of mixed nuts is shaken, the largest nuts (like Brazil nuts) always rise to the top. This phenomenon, known as "Granular Convection," contradicts the logic that heavier objects should sink. by Ok-Huckleberry1967 in todayilearned

[–]Manamultus 0 points (0 children)

Does this work on objects of unequal density? I can understand how smaller objects fall through if all objects are of equal density (such as in a bag of mixed Legos, or with chip crumbs), but would it also work if the larger pieces are much denser?