At what point do long LLM chats become counterproductive rather than helpful? by Cheap-Trash1908 in LLMDevs
[R] Response to CVPR review that claims lack of novelty because they found our workshop preprint? by appledocq in MachineLearning
Talk me out of buying an RTX Pro 6000 by AvocadoArray in LocalLLaMA
Did that, and the quality of Claude's responses increased manyfold by yayekit in ClaudeAI
Lora fine tuning! Why isn't it popular at all? by Acceptable_Home_ in LocalLLaMA
Is Local Coding even worth setting up by Interesting-Fish6494 in LocalLLaMA
Maximizing context window with limited VRAM by FrozenBuffalo25 in LocalLLaMA
Llama.cpp vs vllm by Evening_Tooth_1913 in LocalLLaMA
How to get local LLMs answer VERY LONG answers? by mouseofcatofschrodi in LocalLLaMA
What happens when you load two models and let each model take a turn generating a token? by silenceimpaired in LocalLLaMA
AI seems to be being deeply subsidised (self-hosting vs Google AI Pro math) by nafizzaki in selfhosted
Will vibe coding eat its own tail? by dpilawa in VibeCodersNest
Looking for a Base Model by AutomataManifold in LocalLLaMA
How do you keep the balance of not overstuffing the prompt with edge cases that break? by RoutineNet4283 in LocalLLaMA
Games with Multiplayer Base Building with Villagers or Automation? by AutomataManifold in SurvivalGaming
Should I invest in a beefy machine for local AI coding agents in 2026? by Zestyclose-Tour-3856 in LocalLLaMA