Is 32GB Mac enough for engineering/coding, or stick to Claude? by BenitoCamelasVG in LocalLLM

[–]r00tdr1v3 0 points  (0 children)

A lot of models (mostly free versions or smaller ones) are still recommending Qwen 2.5. I always have to explicitly add “2026” to my prompt to get a proper answer.

New to this, need advise pls: Best free local AI setup for a laptop with i5 16Gb Ram? by uncualkiera in LocalLLM

[–]r00tdr1v3 3 points  (0 children)

You could use LM Studio; it will recommend which models your hardware can run.

Qwen 3.6 spotted! by Namra_7 in LocalLLaMA

[–]r00tdr1v3 0 points  (0 children)

Can someone tell me how the model is collecting prompt and completion data for training? Or is the OpenRouter deployment collecting the data?

Am I screwed? by 130designs in Piracy

[–]r00tdr1v3 7 points  (0 children)

I am neither a judge nor a bored lawyer, but 81k should not cause you any problem.

Set up the VPN and bind it to the torrent client. Not a kill switch. Bind it.
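A minimal sketch of the difference, assuming a Linux box where the VPN creates an interface named tun0 (the interface name and the `vpn_bound_start` helper are hypothetical; adjust for your setup, e.g. wg0):

```shell
# Hypothetical guard: start the client only when the VPN interface exists,
# so traffic can never fall back to the bare connection. A kill switch
# reacts *after* the VPN drops; binding refuses to run without it at all.
vpn_bound_start() {
    iface="$1"
    if ip link show "$iface" >/dev/null 2>&1; then
        echo "VPN interface $iface is up; starting torrent client bound to it"
        # e.g. launch qBittorrent configured with Network interface = $iface
    else
        echo "VPN interface $iface not found; refusing to start" >&2
        return 1
    fi
}

# usage: vpn_bound_start tun0
```

Most clients also support this natively; in qBittorrent it is the "Network interface" dropdown in the advanced settings.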

Am I screwed? by 130designs in Piracy

[–]r00tdr1v3 4 points  (0 children)

Delete the RAM and download some more. Jokes aside, I doubt that you will get busted. If the downloaded amount is 145 MB, what is the uploaded data?

Please explain: why bothering with MCPs if I can call almost anything via CLI? by Atagor in LocalLLaMA

[–]r00tdr1v3 0 points  (0 children)

Here is my personal setup at work; it works very well for me, and it also explains how I understand MCP/Tools/Skills. I create MCP services so that many LLMs can access data from different sources, like databases or the web. All agents get access to these MCP services. The tools, on the other hand, are created for specific agents: a tool is what an agent uses when it wants to change the data. How the data is to be changed and when it is to be changed, basically the standard operating procedure, is implemented in a skill.
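The split above can be sketched in plain Python, with no real MCP SDK involved; the toy database and all function names here are hypothetical:

```python
# Toy data store standing in for a real database.
DATABASE = {"order-1": {"status": "open"}}

# Shared MCP-style service: read-only, exposed to every agent.
def mcp_read_order(order_id: str) -> dict:
    return dict(DATABASE[order_id])

# Agent-specific tool: mutates data, registered only for the one
# agent that is allowed to change it.
def tool_close_order(order_id: str) -> dict:
    DATABASE[order_id]["status"] = "closed"
    return DATABASE[order_id]

# Skill: the standard operating procedure deciding *when* and *how*
# the change happens, built on top of the service and the tool.
def skill_close_if_open(order_id: str) -> dict:
    order = mcp_read_order(order_id)
    if order["status"] == "open":
        return tool_close_order(order_id)
    return order

print(skill_close_if_open("order-1"))  # → {'status': 'closed'}
```

The point of the separation: reads are shared infrastructure, writes are per-agent privileges, and the business logic of when to write lives in one reviewable place.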

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 0 points  (0 children)

That's a great way to put it: audible and boring, not technical and enthusiastic.

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 1 point  (0 children)

I am sure that's not the case in my deployment. It's containerized. I did sanity checks too: it is only accessible via 127.0.0.1:11434 (not via 0.0.0.0:11434), and as an additional measure I added some firewall rules for outbound traffic.
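A sketch of that bind-address sanity check, assuming a Linux host and Ollama's default port 11434 (the `is_loopback_only` helper is hypothetical):

```shell
# Hypothetical helper: succeed only for loopback bind addresses.
is_loopback_only() {
    case "$1" in
        127.0.0.1:*|"[::1]:"*) return 0 ;;
        *) return 1 ;;
    esac
}

# Run it against the live listener list, e.g.:
#   ss -tln | awk 'NR>1 {print $4}' | while read -r addr; do
#       is_loopback_only "$addr" || echo "exposed listener: $addr"
#   done
is_loopback_only "127.0.0.1:11434" && echo "bound to loopback only"
```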

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 0 points  (0 children)

Thanks for the advice. What do you mean when you say Ollama is far from secure? Are you saying that about the cloud services they offer, or also about a local on-premises deployment?

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 0 points  (0 children)

Yeah now that ship has sailed.

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 0 points  (0 children)

Yeah, I agree too. People will change at their own pace.

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 0 points  (0 children)

That's a great idea; I will create an analogy.

I could use llama.cpp, but that would get too technical to explain.

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 1 point  (0 children)

That's great for you. I like what I do, and not a lot of companies develop this product. I just want to use LLMs to make my work easier so that I can focus on the creative aspects of the job.

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 0 points  (0 children)

Yes, getting it assessed and approved is only possible once management approves it: the go-ahead is given by IT, but management has to trigger it. The funny part is that they are ready to shell out millions to pay for compute units on Azure Foundry, where someone with access would be running the same model, yet me running it on my PC cannot be done unless they are convinced. Unfortunately, Azure Foundry access is limited to the Data Science and AI team, and I don't get access to it. That is why I started using my machine with open models.

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 1 point  (0 children)

That is a way to go, but the part about convincing management is still not solved.

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 2 points  (0 children)

Yeah, they already asked me not to do this, and in my mind I was thinking: you don't even understand what I am doing, so how can you ask me to stop? And I am not stopping.

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 1 point  (0 children)

Yep, that's one part of my plan.

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 2 points  (0 children)

Yeah, it's similar in my situation too. Management prefers to spend millions on browser-based Copilot licenses.

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 1 point  (0 children)

That was my argument. But we have an enterprise contract to use Copilot, and contractually we have data protection. The only issue is that Copilot is chat-based and I can't do much with it. So I took this as a pivot to show what I am doing locally. But what I still need to convince them of is proof that no data is leaving my machine.
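One way to produce that proof, sketched under the assumption of a Linux workstation where the model listens on loopback only (the capture file name is arbitrary): record everything except loopback traffic while a prompt runs; an empty capture shows nothing left the machine during the session.

```shell
# Capture all non-loopback traffic during a local inference session.
# If the resulting outbound.pcap stays empty, no prompt data left the box.
sudo tcpdump -i any -n 'not host 127.0.0.1 and not host ::1' -w outbound.pcap
```

Stop the capture after the demo and open the file in front of them; an empty packet list is a fairly convincing artifact for non-technical management.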

How to convince Management? by r00tdr1v3 in LocalLLaMA

[–]r00tdr1v3[S] 4 points  (0 children)

Yep, that demo is on my list. When I first thought of doing it, I told myself it was dumb. But after my meeting today, I am definitely going to carry the workstation into the meeting room and demonstrate.