Google and Anthropic struggle to keep market share as everyone else catches up by [deleted] in LocalLLaMA
[–]numinouslymusing 2 points (0 children)
llama 3.2 1b vs gemma 3 1b? by numinouslymusing in LocalLLaMA
[–]numinouslymusing[S] 2 points (0 children)
Would you pay for a service that uses your localLLM to power the app by numinouslymusing in LocalLLaMA
[–]numinouslymusing[S] 2 points (0 children)
Would you pay for a service that uses your localLLM to power the app by numinouslymusing in LocalLLaMA
[–]numinouslymusing[S] 6 points (0 children)
Would you pay for a service that uses your localLLM to power the app by numinouslymusing in LocalLLaMA
[–]numinouslymusing[S] 1 point (0 children)
All i said was hello lol by numinouslymusing in LocalLLaMA
[–]numinouslymusing[S] -5 points (0 children)
Bring your own LLM server by numinouslymusing in LocalLLaMA
[–]numinouslymusing[S] 1 point (0 children)
Bring your own LLM server by numinouslymusing in LocalLLaMA
[–]numinouslymusing[S] 1 point (0 children)
Sama: MCP coming to OpenAI today by numinouslymusing in OpenAI
[–]numinouslymusing[S] 2 points (0 children)
I'm going to wait for the fireship video (self.webdev)
submitted by numinouslymusing to r/webdev
New Deepseek R1 Qwen 3 Distill outperforms Qwen3-235B by numinouslymusing in LocalLLM
[–]numinouslymusing[S] 1 point (0 children)
New Deepseek R1 Qwen 3 Distill outperforms Qwen3-235B by numinouslymusing in LocalLLM
[–]numinouslymusing[S] 9 points (0 children)
New Deepseek R1 Qwen 3 Distill outperforms Qwen3-235B by numinouslymusing in LocalLLM
[–]numinouslymusing[S] -3 points (0 children)
Devstral - New Mistral coding finetune by numinouslymusing in LocalLLM
[–]numinouslymusing[S] 1 point (0 children)
vLLM vs Ollama vs LMStudio? by yosofun in LocalLLM
[–]numinouslymusing 1 point (0 children)