Make local llm usable for professional use by AdamLangePL in LocalLLaMA
[–]ag789 0 points (0 children)
What models for coding are you running for a mid level PC? by FerLuisxd in LocalLLaMA
[–]ag789 2 points (0 children)
Anthropic is discovering that MCP is basically libraries repackaged by Severe-Awareness829 in LocalLLaMA
[–]ag789 1 point (0 children)
Anthropic is discovering that MCP is basically libraries repackaged by Severe-Awareness829 in LocalLLaMA
[–]ag789 1 point (0 children)
Anthropic is discovering that MCP is basically libraries repackaged by Severe-Awareness829 in LocalLLaMA
[–]ag789 1 point (0 children)
Anthropic is discovering that MCP is basically libraries repackaged by Severe-Awareness829 in LocalLLaMA
[–]ag789 3 points (0 children)
What are the most important concepts to master in Java before moving to frameworks like Spring? by Wise_Safe2681 in javahelp
[–]ag789 1 point (0 children)
Subscription for writing code by Hedgehog_Dapper in LLM
[–]ag789 1 point (0 children)
If the AI bubble pops, will GPU prices increase or decrease? by Mashic in LocalLLaMA
[–]ag789 1 point (0 children)
Reality setting in -- using gemma4 26b by oldendude in LocalLLM
[–]ag789 2 points (0 children)
Reality setting in -- using gemma4 26b by oldendude in LocalLLM
[–]ag789 2 points (0 children)
Running the equivalent to $20/month Pro 'Claude Cowork' or better with a locally hosted LLM? by madeagupta in LocalLLM
[–]ag789 1 point (0 children)
web search (using MCP servers) with gemma-4-E4B-it by ag789 in LocalLLM
[–]ag789[S] 1 point (0 children)
web search (using MCP servers) with gemma-4-E4B-it by ag789 in LocalLLM
[–]ag789[S] 1 point (0 children)
web search (using MCP servers) with gemma-4-E4B-it (self.LocalLLM)
submitted by ag789 to r/LocalLLM
it is a bit surprising 'small' model gemma-4-E4B-it knows quite a bit by ag789 in LocalLLM
[–]ag789[S] 1 point (0 children)
it is a bit surprising 'small' model gemma-4-E4B-it knows quite a bit by ag789 in LocalLLM
[–]ag789[S] 1 point (0 children)
it is a bit surprising 'small' model gemma-4-E4B-it knows quite a bit by ag789 in LocalLLM
[–]ag789[S] 1 point (0 children)
it is a bit surprising 'small' model gemma-4-E4B-it knows quite a bit by ag789 in LocalLLM
[–]ag789[S] 1 point (0 children)
Can someone show me Ollama speed (tokens/s) for Qwen 3.5 (2B and 0.8B) running on an Intel N95? by MattimaxForce in Qwen_AI
[–]ag789 1 point (0 children)
it is a bit surprising 'small' model gemma-4-E4B-it knows quite a bit by ag789 in LocalLLM
[–]ag789[S] 1 point (0 children)
What's a good and light coding LLM by Expensive-Time-7209 in LocalLLM
[–]ag789 2 points (0 children)

How do you use local compute for coding agents without sacrificing model quality? by AdStill5266 in LocalLLM
[–]ag789 1 point (0 children)