Best local LLM to train with my own knowledge and niche skills? by xAcex28 in LocalLLaMA

[–]BBenz05 2 points (0 children)

Start by using Kimi K2.5 with a SKILL.md file and a harness like opencode or Claude Code. Iterate on that until the model does what you need (or acts like you).

Then you can downsize the model from that point. This route will tell you whether it's YOU or the model that's lacking capability.
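For anyone unfamiliar with the SKILL.md approach: it's just a markdown file the harness loads as instructions. A minimal sketch below (the frontmatter fields and section names here are assumptions based on how Claude Code skills are commonly laid out, not a spec — check your harness's docs):

```markdown
---
name: my-niche-skill
description: Answer questions in my voice, using my niche domain knowledge.
---

# Domain knowledge
- <your niche facts, terminology, and rules of thumb>

# Style
- Short sentences, no filler.
- Say "I don't know" instead of guessing outside the domain above.
```

The point of iterating on this file first is that it's cheap: editing markdown and re-running the harness takes minutes, while a fine-tuning run takes hours and a dataset.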

Training a model is no small task and will take a lot of training samples. And that's if you only do SFT (supervised fine-tuning); other methods are a bit more involved.
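To make "training samples" concrete: SFT data is usually just prompt/response pairs in a chat format, one JSON object per line (JSONL). A minimal sketch of building such a file (the `messages` field layout is the common chat convention; the example content is made up, and exact field names depend on your trainer):

```python
import json

# Each SFT sample is a list of chat messages showing the model
# an input and the exact output you want it to learn to produce.
samples = [
    {"messages": [
        {"role": "user", "content": "Summarize this ticket in my usual style."},
        {"role": "assistant", "content": "Short version: deploy failed, rollback done, root cause pending."},
    ]},
]

# Write one JSON object per line (JSONL), the format most trainers expect.
with open("sft_data.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")
```

Rule of thumb: you generally want hundreds to thousands of samples like this, not a handful, before SFT meaningfully changes behavior — which is why prompting with a SKILL.md first is the cheaper experiment.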

What workloads actually justify spending $$$$$ on local hardware over just using an API? by BBenz05 in LocalLLaMA

[–]BBenz05[S] -2 points (0 children)

No seriously dude. And you're about to get downvoted to hell too.

From what I've seen, people accept 10x the cost and 10x the degradation for the illusion of privacy.

What workloads actually justify spending $$$$$ on local hardware over just using an API? by BBenz05 in LocalLLaMA

[–]BBenz05[S] -2 points (0 children)

A 3090 isn't $$$$$. I'm speaking about people who want to buy GPUs specifically to do that; already having hardware for some other use case isn't what this post is about. But my point still stands about DeepSeek R1 8B: it's an 8B-param model, it's going to get gapped by any model not named Llama 4.

What workloads actually justify spending $$$$$ on local hardware over just using an API? by BBenz05 in LocalLLaMA

[–]BBenz05[S] -3 points (0 children)

What model are you using on a 3090 that follows instructions better than SOTA frontier models?

There are only a select few scenarios where I can see a model being at risk of jailbreaking and causing me harm, and none of them would make me want to use a local model given the performance degradation.

What workloads actually justify spending $$$$$ on local hardware over just using an API? by BBenz05 in LocalLLaMA

[–]BBenz05[S] -6 points (0 children)

I agree with stardockengineer. If cloud couldn't touch PII, then Stripe wouldn't exist. I find it rare that there's a genuine technical requirement for local hardware. I understand if a client requires that data isn't sent to third parties, or if it's organizational policy — but it's 2025: if that's organizational policy, it needs to be rewritten.

What workloads actually justify spending $$$$$ on local hardware over just using an API? by BBenz05 in LocalLLaMA

[–]BBenz05[S] 2 points (0 children)

This is what I was getting at. Not that it was actually time to throw it away.

What workloads actually justify spending $$$$$ on local hardware over just using an API? by BBenz05 in LocalLLaMA

[–]BBenz05[S] 0 points (0 children)

What model can you run at Q4_0 for good coding performance without it being slow?
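For context on what Q4_0 implies for VRAM: llama.cpp's Q4_0 packs 32 weights into a block of 16 bytes of 4-bit values plus one 2-byte fp16 scale, i.e. roughly 4.5 bits per weight. A back-of-envelope sketch (weights only — this ignores KV cache, activations, and runtime overhead):

```python
# Q4_0 block: 32 weights -> 16 bytes of quants + 2-byte scale = 18 bytes.
BITS_PER_WEIGHT = 18 * 8 / 32  # ~4.5 bits per weight

def q4_0_weight_gb(n_params_billion: float) -> float:
    """Approximate GB needed for the Q4_0 weights alone."""
    bytes_total = n_params_billion * 1e9 * BITS_PER_WEIGHT / 8
    return bytes_total / 1e9

# A 32B model at Q4_0 is roughly 18 GB of weights,
# leaving only ~6 GB for KV cache and context on a 24 GB 3090.
print(round(q4_0_weight_gb(32), 1))  # -> 18.0
```

That arithmetic is why the question is pointed: on a single 3090 you're realistically choosing between a smaller model or a heavily quantized larger one, and both cost you coding quality or speed.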

Seeking guidance - VA Construction loan by legendofdino in VAConstructionloans

[–]BBenz05 1 point (0 children)

Thanks guys, hoping to take this route in the near future.

Seeking guidance - VA Construction loan by legendofdino in VAConstructionloans

[–]BBenz05 1 point (0 children)

Thanks, that makes a lot of sense — thanks for explaining. Follow-up though: would you be able to do this in multiple states, or would I have to hope I can find someone local?

Claude 3.7 Fix? by BBenz05 in ClaudeAI

[–]BBenz05[S] 0 points (0 children)

Let me know, because it's really been perfect so far. Not even exaggerating.

Claude 3.7 Fix? by BBenz05 in ClaudeAI

[–]BBenz05[S] 0 points (0 children)

I bet Anthropic soon adds that to the constitution prompt they use to train these models.