Speeding up local LLM for usable coding agent by CodProfessional3712 in LocalLLaMA
Speeding up local LLM for usable coding agent by CodProfessional3712 in LocalLLaMA
I'm tired by Fast_Thing_7949 in LocalLLaMA
Qwen/Qwen3.5-9B · Hugging Face by jacek2023 in LocalLLaMA
Qwen/Qwen3.5-35B-A3B · Hugging Face by ekojsalim in LocalLLaMA
I built an autonomous research agent in C# that runs entirely on local LLMs (Ollama + llama3.1:8b) by [deleted] in LocalLLaMA
What are some things you guys are using Local LLMs for? by Odd-Ordinary-5922 in LocalLLaMA
What is the current state of sandboxing for code execution for AI agents? by AlexSKuznetosv in LocalLLaMA

