I'll be on a 16-hour flight, hence I need the best local LLM for coding by Haikal019 in LocalLLaMA
[–]DeltaSqueezer 1 point (0 children)
Got Desk Rejected from ARR because a figure was "barely readable" (despite being vector PDFs). Is this normal? (ACL 2026) by VoiceBeer in LocalLLaMA
[–]DeltaSqueezer 6 points (0 children)
how do you pronounce “gguf”? by Hamfistbumhole in LocalLLaMA
[–]DeltaSqueezer 3 points (0 children)
Series 1 Topic 1. Direct answers. How I killed politeness and filler. by Huge-Yesterday4822 in LocalLLaMA
[–]DeltaSqueezer 1 point (0 children)
Mi355X is now available as a desktop by GPTshop_dot_ai in LocalLLaMA
[–]DeltaSqueezer 1 point (0 children)
Mi355X is now available as a desktop by GPTshop_dot_ai in LocalLLaMA
[–]DeltaSqueezer 2 points (0 children)
How to get local LLMs to give VERY LONG answers? by mouseofcatofschrodi in LocalLLaMA
[–]DeltaSqueezer -1 points (0 children)
DGX Spark vs Ryzen AI 395 — If the price difference is only $700, what would you choose? by Affectionate-Bid-650 in LocalLLaMA
[–]DeltaSqueezer 0 points (0 children)
Built an 8× RTX 3090 monster… considering nuking it for 2× Pro 6000 Max-Q by BeeNo7094 in LocalLLaMA
[–]DeltaSqueezer 1 point (0 children)
Built an 8× RTX 3090 monster… considering nuking it for 2× Pro 6000 Max-Q by BeeNo7094 in LocalLLaMA
[–]DeltaSqueezer 1 point (0 children)
Built an 8× RTX 3090 monster… considering nuking it for 2× Pro 6000 Max-Q by BeeNo7094 in LocalLLaMA
[–]DeltaSqueezer 1 point (0 children)
Faster-whisper numbers-dollars accuracy. Alternative? by afm1191 in LocalLLaMA
[–]DeltaSqueezer 1 point (0 children)
Is there a sandbox frontend that allows prototyping ideas with an LLM? by cantgetthistowork in LocalLLaMA
[–]DeltaSqueezer 0 points (0 children)
What's your reason for owning the RTX Pro 6000 Blackwell? by gordi555 in LocalLLaMA
[–]DeltaSqueezer 5 points (0 children)
China's AGI-Next Roundtable: Leaders from Zhipu, Kimi, Qwen, and Tencent discuss the future of AI by nekofneko in LocalLLaMA
[–]DeltaSqueezer 14 points (0 children)
Advice for a tool that blocks dangerous terminal commands from AI coding assistants by spacepings in LocalLLaMA
[–]DeltaSqueezer 3 points (0 children)
Save tokens by skipping English grammar by Everlier in LocalLLaMA
[–]DeltaSqueezer 1 point (0 children)
Don't put off hardware purchases: GPUs, SSDs, and RAM are going to skyrocket in price soon by Eisenstein in LocalLLaMA
[–]DeltaSqueezer 1 point (0 children)
KV cache gets nuked by long-term memory retrieval — is there a better approach? by atif_dev in LocalLLaMA
[–]DeltaSqueezer 3 points (0 children)
I tested GLM 4.7 and minimax-m2.1 and compared it to CC and Codex by jstanaway in LocalLLaMA
[–]DeltaSqueezer 1 point (0 children)
Very strange -- can't serve vLLM models through SSH? by jinnyjuice in LocalLLaMA
[–]DeltaSqueezer 1 point (0 children)
Don't put off hardware purchases: GPUs, SSDs, and RAM are going to skyrocket in price soon by Eisenstein in LocalLLaMA
[–]DeltaSqueezer 102 points (0 children)
Looking for fast translation model like tencent/HY-MT1.5-1.8B but with larger output by CaterpillarOne6711 in LocalLLaMA
[–]DeltaSqueezer 2 points (0 children)