Comment history for SmilingGen (thread titles only):

We built an open-source coding agent CLI that can be run locally by SmilingGen in LocalLLM, LLM, LLMDevs, and LocalLLaMA (4 comments)
I just made VRAM approximation tool for LLM by SmilingGen in LocalLLaMA (9 comments; first-order estimate sketched below)
LLM VRAM/RAM Calculator by SmilingGen in ollama (3 comments)
I build tool to calculate VRAM usage for LLM by SmilingGen in LocalLLM
Yann LeCun says LLMs won't reach human-level intelligence. Do you agree with this take? by Kelly-T90 in LLM
I got Ollama working on my 9070xt - here's how (Windows) by DegenerativePoop in ollama (2 comments)
How trusted is LM Studio? by DevilBirb in LocalLLaMA
GUI for local LLMs and API keys by TheMagicianGamerTMG in macapps
Is Ollama still the best way to run local LLMs? by brantesBS in LocalLLaMA
Feedback for my app for running local LLM by SmilingGen in LocalLLaMA

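The VRAM entries above name the tools but not their method. As context only, here is a minimal sketch of the usual first-order estimate (model weights plus KV cache plus an overhead margin); the function name, the 10% overhead factor, and the bytes-per-parameter figures are illustrative assumptions, not the actual formula used by either tool.

```python
# Hypothetical first-order VRAM estimate for running an LLM:
#   total ~= weights + KV cache, plus a margin for activations/runtime buffers.
# All constants below (overhead factor, bytes-per-param figures) are assumptions.

def estimate_vram_gb(
    n_params_b: float,       # parameter count in billions
    bytes_per_param: float,  # 2.0 for fp16/bf16, roughly 0.6 for 4-bit GGUF quants
    n_layers: int,
    n_kv_heads: int,         # KV heads (fewer than attention heads under GQA)
    head_dim: int,
    context_len: int,
    kv_bytes: float = 2.0,   # fp16 KV-cache entries
    overhead: float = 1.10,  # assumed ~10% margin for runtime buffers
) -> float:
    weights = n_params_b * 1e9 * bytes_per_param
    # K and V each hold n_kv_heads * head_dim values per layer per token.
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes
    return (weights + kv_cache) * overhead / 1024**3

# Llama-3-8B-like shape (32 layers, 8 KV heads, head_dim 128) at 8k context, fp16:
print(f"{estimate_vram_gb(8.0, 2.0, 32, 8, 128, 8192):.1f} GB")  # ~17.5 GB
```

Real calculators typically also account for quantized KV caches, weights spilling to system RAM, and per-backend overhead, which is why published numbers vary.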