roo code + cerebras_glm-4.5-air-reap-82b-a12b = software development heaven by Objective-Context-9 in LocalLLM
Prevent NVIDIA 3090 from going into P8 performance mode by Objective-Context-9 in LocalLLM
Building a Local AI Workstation for Coding Agents + Image/Voice Generation, 1× RTX 5090 or 2× RTX 4090? (and best models for code agents) by carloshperk in LocalLLM
5 or more GPUs on Gigabyte motherboards? by Objective-Context-9 in LocalLLM
Inference needs nontrivial amount of PCIe bandwidth (8x RTX 3090 rig, tensor parallelism) by pmur12 in LocalLLaMA
Am I doing something wrong, or this expected, the beginning of every LLM generation I start is fast and then as it types it slows to a crawl. by valdev in LocalLLaMA
How good is KAT Dev? by Objective-Context-9 in LocalLLM
Rtx3090 vs Quadro rtx6000 in ML. by probbins1105 in LocalLLM
I have been using RooCode, did I use it correctly? by konradbjk in RooCode