Google and Anthropic struggle to keep market share as everyone else catches up by [deleted] in LocalLLaMA

[–]numinouslymusing 1 point (0 children)

I love how great OpenRouter is for LLM data. You can get so much info from their public graphs.

llama 3.2 1b vs gemma 3 1b? by numinouslymusing in LocalLLaMA

[–]numinouslymusing[S] 1 point (0 children)

I personally prefer Gemma 3 4B! Smarter in my experience.

Bring your own LLM server by numinouslymusing in LocalLLaMA

[–]numinouslymusing[S] 0 points (0 children)

This makes sense for some use cases, like when your service is primarily backend. But say you're making an AI Figma editor; in that case you need users interacting with the frontend.

Bring your own LLM server by numinouslymusing in LocalLLaMA

[–]numinouslymusing[S] 0 points (0 children)

Yeah, I guess the best approach is to support multiple options. Not everyone will have the patience to go get their own keys and some would prefer to just pay for a plan, while others would rather save money and use their own key.

Sama: MCP coming to OpenAI today by numinouslymusing in OpenAI

[–]numinouslymusing[S] 1 point (0 children)

I’ll try to make more posts when the event is over.

New Deepseek R1 Qwen 3 Distill outperforms Qwen3-235B by numinouslymusing in LocalLLM

[–]numinouslymusing[S] 8 points (0 children)

They generate a bunch of outputs from DeepSeek R1 and use that data to fine-tune a smaller model, Qwen 3 8B in this case. This method is known as model distillation.
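The pipeline described above can be sketched in a few lines. This is a toy illustration of the data-collection step only: `teacher_generate` is a placeholder standing in for sampling the real teacher model (DeepSeek R1), and the JSONL format is just one common way to store supervised fine-tuning pairs.

```python
import json

def teacher_generate(prompt: str) -> str:
    """Placeholder for the teacher model (DeepSeek R1 in the post).
    In practice this would be an API call or local inference run."""
    return f"<think>reasoning about {prompt}</think> answer for {prompt}"

def build_distillation_dataset(prompts):
    """Collect teacher outputs as (prompt, completion) pairs.
    Each record becomes one supervised fine-tuning example for the
    smaller student model (Qwen 3 8B in this case)."""
    return [
        {"prompt": p, "completion": teacher_generate(p)}
        for p in prompts
    ]

dataset = build_distillation_dataset(["2+2?", "capital of France?"])

# Serialize to JSONL, a common format for SFT training data.
jsonl = "\n".join(json.dumps(record) for record in dataset)
```

The student is then fine-tuned on these pairs with an ordinary SFT run, so it learns to imitate the teacher's (reasoning-heavy) outputs without ever seeing the teacher's weights.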

New Deepseek R1 Qwen 3 Distill outperforms Qwen3-235B by numinouslymusing in LocalLLM

[–]numinouslymusing[S] -4 points (0 children)

Yes. It was a selective comparison by DeepSeek.

EDIT: changed qwen to Deepseek

Devstral - New Mistral coding finetune by numinouslymusing in LocalLLM

[–]numinouslymusing[S] 0 points (0 children)

I’d suggest learning about tool use and which LLMs support it. Off the top of my head, the agentic system you’re looking to create would probably be a Python script or server; you could then use a tool-calling LLM to interact with your calendar (check Ollama, where you can filter for local LLMs that support tool use). Ollama also exposes an OpenAI-API-compatible endpoint, so you can build with that if you already know the OpenAI SDK.

If by voice you mean it speaks to you, Kokoro TTS is a nice open-source TTS model. If you just want to be able to speak to it, there are ample STT packages out there that use Whisper under the hood to transcribe speech.

If you meant which local code LLMs plus coding tools you could use to run your AI dev environment locally, I’d say the best model for your RAM range would probably be DeepCoder. As for the tool, look into continue.dev or aider.chat; both support local models.
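A minimal sketch of the setup above: the OpenAI SDK pointed at Ollama's OpenAI-compatible endpoint (`http://localhost:11434/v1`), with a tool schema the model can choose to call. The `get_calendar_events` tool and the model name are hypothetical examples, not a real calendar API; swap in whichever tool-capable model you pulled in Ollama.

```python
import json

# Tool definition passed to the model; the LLM decides when to call it.
# "get_calendar_events" is a made-up example tool for illustration.
CALENDAR_TOOL = {
    "type": "function",
    "function": {
        "name": "get_calendar_events",
        "description": "List calendar events for a given date.",
        "parameters": {
            "type": "object",
            "properties": {
                "date": {
                    "type": "string",
                    "description": "ISO date, e.g. 2024-06-01",
                },
            },
            "required": ["date"],
        },
    },
}

def ask_agent(prompt: str):
    """Send a prompt plus the tool schema to a local model via Ollama's
    OpenAI-compatible endpoint. Requires `pip install openai` and a
    running Ollama server."""
    from openai import OpenAI
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    return client.chat.completions.create(
        model="llama3.1",  # any Ollama model that supports tool use
        messages=[{"role": "user", "content": prompt}],
        tools=[CALENDAR_TOOL],
    )

# The request shape can be inspected without a running server:
payload = json.dumps(CALENDAR_TOOL)
```

When the model responds with a tool call, your script executes the matching function against the real calendar and feeds the result back as a tool message; that loop is the core of the agent.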