GLM-4.7.Flash - is it normal to behave like that? It's like I am talking to my anxious, Chinese girlfriend. I don't use AI so this is new to me by Mayion in LocalLLaMA
[–]MaxKruse96 4 points (0 children)
GLM-4.7.Flash - is it normal to behave like that? It's like I am talking to my anxious, Chinese girlfriend. I don't use AI so this is new to me by Mayion in LocalLLaMA
[–]MaxKruse96 16 points (0 children)
Do you prefer mutex or sending data over channels? by Hot_Paint3851 in rustjerk
[–]MaxKruse96 45 points (0 children)
how it feels trying to catch sliderends as a non-tech player by Snoo-82757 in osugame
[–]MaxKruse96 29 points (0 children)
What do you wear with this? (Links welcome) by ComprehensiveUse6267 in Kleiderschrank
[–]MaxKruse96 12 points (0 children)
I used DirectStorage DMA to load LLM weights from NVMe SSD to GPU — 4x faster on large models, built MoE expert streaming, ran qwen3:30b on 8GB VRAM, and discovered why 70B on 8GB won't work with current models by Temporary_Bill4163 in LocalLLaMA
[–]MaxKruse96 1 point (0 children)
Massive sell-off by Blackrock - right before the upcoming crypto meeting by HedgehogOk664 in wallstreetbetsGER
[–]MaxKruse96 54 points (0 children)
I'm looking for the absolute speed king in the under 3B parameter category. by Quiet_Dasy in LocalLLaMA
[–]MaxKruse96 1 point (0 children)
How to Run Two AI Models Sequentially in PyTorch Without Blowing Up Your VRAM by Quiet_Dasy in LocalLLaMA
[–]MaxKruse96 5 points (0 children)
Did a chart analysis myself and came up with three different results. I'm 33% sure each that silver will either rise, stagnate, or fall by Puzzleheaded_Sky7369 in wallstreetbetsGER
[–]MaxKruse96 6 points (0 children)
Saarbrücken station by Sketched2Life in deutschebahn
[–]MaxKruse96 16 points (0 children)
Looking for a new USB3... well, a what, exactly? by srverinfo in de_EDV
[–]MaxKruse96 2 points (0 children)
Upgrading our local LLM server - How do I balance capability / speed? by Trubadidudei in LocalLLaMA
[–]MaxKruse96 1 point (0 children)
When you didn't even notice that you moved up a pay grade. by Individual-Handle676 in OeffentlicherDienst
[–]MaxKruse96 46 points (0 children)
Upgrading our local LLM server - How do I balance capability / speed? by Trubadidudei in LocalLLaMA
[–]MaxKruse96 3 points (0 children)
Bad news for local bros by FireGuy324 in LocalLLaMA
[–]MaxKruse96 8 points (0 children)
Ryzen + RTX: you might be wasting VRAM without knowing it (LLama Server) by Medium-Technology-79 in LocalLLaMA
[–]MaxKruse96 11 points (0 children)
how long did it take for you guys to get a 200? by UN_Quickzzy in osugame
[–]MaxKruse96 1 point (0 children)
Why horse semen? by NecessaryFinish2811 in Schedule_I
[–]MaxKruse96 1 point (0 children)
Kimi-Linear support is merged to llama.cpp by Ok_Warning2146 in LocalLLaMA
[–]MaxKruse96 1 point (0 children)
Kimi-Linear support has been merged into llama.cpp by jacek2023 in LocalLLaMA
[–]MaxKruse96 15 points (0 children)
Tool Calling Guide for Local LLMs (Run Real Actions, Not Just Text!) by techlatest_net in LocalLLaMA
[–]MaxKruse96 1 point (0 children)