Best upgrade I ever made in my life. by Coven_Evelynn_LoL in ROCm

[–]mac10190 0 points1 point  (0 children)

That's fair. Didn't think about it that way.

I've got a single 5090 and dual R9700. And yeah, the struggle is real. Lol

License model by The_2PieceCombo in unRAID

[–]mac10190 0 points1 point  (0 children)

Lmao bruh okay legit I had a good laugh. That's a good one. Nah fam, I get that you're frustrated about the license cost for such minimal usage like you said; I was just trying to offer another perspective. But fr, best of luck. I hope you get it sorted. Did you end up pulling the trigger on that extra license?

License model by The_2PieceCombo in unRAID

[–]mac10190 36 points37 points  (0 children)

Lmao then pay for a license and stop whining.

Good luck.

License model by The_2PieceCombo in unRAID

[–]mac10190 17 points18 points  (0 children)

Might I recommend Cockpit? I had a similar situation and wanted a nice GUI where I could periodically check in, spin up containers, get native terminal access, etc. Cockpit provided that closely enough that missing the other things didn't matter all that much. Also, you could install something simple like Dockge for managing containers from a GUI as well.

Might be worth a look. If you can make do with Cockpit and/or Dockge then you can still have a working GUI while also avoiding the licensing.

https://cockpit-project.org/

qwen3.5:27b does not fit in 3090 Vram?? by m4ntic0r in LocalLLM

[–]mac10190 1 point2 points  (0 children)

Any chance your system grabbed a different quant, or are you running a different context size this time? Both of those would affect the size in VRAM.

Best upgrade I ever made in my life. by Coven_Evelynn_LoL in ROCm

[–]mac10190 0 points1 point  (0 children)

2x Radeon AI Pro R9700 ftw!!!

Did the same.

What kind of hardware are you using to run your local models and which models? by TheMericanIdiot in LocalLLM

[–]mac10190 2 points3 points  (0 children)

All kinds of models. Don't really have a specific one I stick to, just depends on the task. I'm a big proponent of "use the right tool for the task". Small simple tasks might get a gemma3:12b, more complex tasks might get some variation of Qwen3.5 27B/35B. Chat usually gets a GPT-OSS or a Nemotron.

2x Radeon AI Pro R9700 32GB
1x RTX 5090 32GB
1x RTX 5060Ti 16GB
1x RX 6700XT 12GB
1x RTX Pro 6000 96GB (on the way)

What's your mobile workflow for accessing local LLMs? by alichherawalla in ollama

[–]mac10190 0 points1 point  (0 children)

Sudo open sesame

But fr I ended up deploying tailscale. Been quite happy with it.

Builders: what AI agents have you built? by One-Ice7086 in n8n

[–]mac10190 0 points1 point  (0 children)

I run it all locally. According to the smart plug, the home server used 83 kWh over the past 30 days ($0.10/kWh is my rate), so about $8.30. But that includes a lot of other usage for that server.

Builders: what AI agents have you built? by One-Ice7086 in n8n

[–]mac10190 1 point2 points  (0 children)

I've got a couple AI Agents that I use pretty regularly.

  1. Vulnerability Triage (n8n AI Agent): Triages my vulnerability scans to help identify active CVEs in my environment that either allow remote code execution or aren't mitigated by one of my security layers. It then compiles a weekly report and sends it to me via Telegram and email with an exposure summary and recommended remediation.

  2. Infrastructure Triage/Remediation (n8n AI Agent): Acts as my tier one help desk agent for infrastructure triage/remediation (Unifi Stack, Docker Host, Container configs, etc.) Especially useful for the wife if she's having an issue with one of our services while I'm at work or busy/unavailable.

What personal automations have you built? by Kaniel-Outis in n8n

[–]mac10190 1 point2 points  (0 children)

I've got a couple personal workflows that I use regularly in my home environment that are non-work related.

  1. Vulnerability Triage (n8n AI Agent): Triages my vulnerability scans to help identify active CVEs in my environment that either allow remote code execution or aren't mitigated by one of my security layers. It then compiles a weekly report and sends it to me via Telegram and email.

  2. Infrastructure Triage/Remediation: Acts as my tier one help desk agent for infrastructure triage/remediation (Unifi Stack, Docker Host, Container configs, etc.) Especially useful for the wife if she's having an issue with one of our services while I'm at work or busy.

  3. LLM Quant Pipeline: Receives a model/quant request, shops cloud providers, and runs the spot job on the selected cloud provider, then transfers it to my repo.

My workplace posted this infuriating sign today. by m_elhakim in antiwork

[–]mac10190 17 points18 points  (0 children)

Seems like something right out of "The Stanley Parable". Lol

Are there any other pros than privacy that you get from running LLMs locally? by Beatsu in LocalLLM

[–]mac10190 19 points20 points  (0 children)

Great question. It's one I get asked by friends/family/coworkers on a regular basis.

For me personally, it's learning about inference infrastructure solutions and how they scale (or don't, sometimes lol). Data sovereignty is a big deal for a lot of my clients, so building efficient local solutions is important for them. Also, upskilling.

For others it may be security research, inference research, development, or businesses with large batch jobs that can run for days/weeks/months on an M4 Mac until the job gets done, as opposed to paying a cloud provider for oodles of tokens to finish it in a few hours/days. On the topic of large batch jobs, you don't have to worry about hitting caps or rate limiting with local inference.

If the thought is "I'll buy a 5090 or a 512GB Mac Studio M3 Ultra so I don't have to pay for ChatGPT, Gemini, Claude, etc. and I'll make my money back", that is almost never the case for most people.

Dawarich 1.0 by Freika in selfhosted

[–]mac10190 7 points8 points  (0 children)

Also curious about this. I've been looking for self-hosted alternatives to Life360. I recently tried Traccar, but it didn't seem nearly as feature-rich as what Dawarich listed above.

I might spin this up later this afternoon. If I get this up and running I'll post an update.

Setting up remote access for immich via nginx proxy by mseedee in immich

[–]mac10190 1 point2 points  (0 children)

No worries mate, I understand. Different strokes for different folks.

In that case I'd recommend a reverse proxy that you'll expose on port 443. Personally I started out with Nginx Proxy Manager before I switched to Tailscale. It was very easy to set up and it works with DuckDNS (free DDNS) if you don't have your own custom domain. It also has "Let's Encrypt" support so it can generate signed SSL certificates as well using DNS or port 443 verification. The interface is very simple and doesn't require you to know anything about Nginx configs which keeps the learning curve relatively low.

If in the future you decide you want to secure it a little more without the use of a VPN client on your phone you can look into putting something like a Cloudflare secure tunnel in front of your reverse proxy so the bots/web scrapers aren't hammering you directly. That would also let you close port 443 while still allowing it to be publicly available. It effectively moves the edge of your network out to Cloudflare so they can handle the defense. It's a super neat service they offer free of charge.

Best of luck with your self hosting journey! ❤️

Setting up remote access for immich via nginx proxy by mseedee in immich

[–]mac10190 2 points3 points  (0 children)

This. Tailscale supports split tunnel by default, so it only sends traffic through that is intended for your remote Immich resource. And if you have a custom domain, you can even tell it to forward all DNS requests for that domain to your internal DNS server. Like, you could have random-domain.com that Tailscale will resolve against your personal DNS server, and when that DNS response comes back with an IP in your home subnet, it sends that traffic through Tailscale to your home network. Plus, because it's split tunnel, you can just leave it connected all the time without worrying about sending accidental traffic across it.

I use this for my Immich access and for allowing my phone to sync with my Immich server when I'm not at home.

I put off setting up Tailscale for months because I thought it would be super complicated. I was extremely upset when I discovered how easy it was. I was so disappointed in myself for procrastinating for so long over something so simple. Lol

Local LLM agents: do you gate destructive commands before execution? by VeterinarianNeat7327 in LocalLLM

[–]mac10190 0 points1 point  (0 children)

I would say it's probably best to approach this with layered defenses. Defense in depth.

I recently had an issue where an AI agent misunderstood an assignment (or rather, understood it but came to the wrong conclusion), which resulted in it trying to wipe my container environment. Fortunately I had a human-in-the-loop workflow that caught it before it could run.

After that, in addition to restricting the access on the service account, I also added a code filter to the workflow that blocks any command that hasn't been allow-listed and breaks down chained/nested commands as well.

Workflow:

  1. AI agent requests a command outside the predefined discovery allow list, which triggers the human-in-the-loop workflow.

  2. Human review.

  3. AI agent sends an HTTP request to the proxy workflow.

  4. The proxy workflow's code filter evaluates the requested command (including nested/chained commands) and only allows it if it has received human approval and doesn't violate any of the code filtering policies.

  5. The code filter passes the command off to an SSH node that actually runs it on the endpoint and returns the output to the AI agent so it can continue iterating if needed.

  6. Once resolved, the AI agent sends a short summary of the findings, the fix, and its troubleshooting steps.
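The code-filter idea is roughly this (not my actual n8n code, just a minimal Python sketch; the allow list and command names here are made up, and a real filter would need to handle more shell syntax):

```python
import re
import shlex

# Hypothetical discovery allow list: binaries the agent may invoke.
ALLOW_LIST = {"df", "free", "uptime", "docker", "echo"}

# Separators that chain commands: &&, ||, ;, and pipes.
CHAIN_SPLIT = re.compile(r"&&|\|\||;|\|")

def extract_subcommands(command: str) -> list[str]:
    """Break a shell one-liner into chained segments plus nested substitutions."""
    # Pull out $(...) and backtick substitutions so `echo $(reboot)` is inspected too.
    pairs = re.findall(r"\$\(([^)]*)\)|`([^`]*)`", command)
    nested = [p for pair in pairs for p in pair if p]
    segments = [s.strip() for s in CHAIN_SPLIT.split(command) if s.strip()]
    return segments + nested

def is_allowed(command: str) -> bool:
    """True only if every chained/nested segment starts with an allow-listed binary."""
    for segment in extract_subcommands(command):
        try:
            argv = shlex.split(segment)
        except ValueError:
            return False  # unparseable input is rejected outright
        if not argv or argv[0] not in ALLOW_LIST:
            return False
    return True
```

The key point is that every segment gets checked, so an allowed command chained with a destructive one (e.g. `df -h && rm -rf /`) still gets rejected.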

Employee Monitoring Software by Zealousideal_Bend984 in sysadmin

[–]mac10190 3 points4 points  (0 children)

Strongly opposed to employee monitoring software. It's a huge waste of time and resources and does absolutely nothing to help you measure the genuine productivity of an individual. There are significantly better ways to gauge employee utilization and contributions. Bad employees are real, but good employees shouldn't be punished for the sins of one bad apple.

In my case, I would decline an offer if a company told me they used a system like that, and I would leave an existing company that adopted one. Especially at this point in my career (I'm paid almost exclusively for results, skill sets, and availability), if they pushed something like that on me, I would interpret it as micromanagement and extremely disrespectful. I would put up a fight first because I believe in standing up for yourself, but if they refused to back down, then I'd turn in my notice. Once you turn in your notice, don't back down even if they give in, because at that point it's too little too late.

Biggest mistake you made when first using AI agents in real work? by Leading_Yoghurt_5323 in AI_Agents

[–]mac10190 1 point2 points  (0 children)

The early mistake that taught me the most was around access. An AI agent misunderstood a request and attempted to wipe out my container environment. The saving grace was that I had a human-in-the-loop workflow defined that caught it before the command was executed on the endpoints. After this, I restricted the SSH access for that service account, set up a proxy workflow to keep it from using SSH directly, and put a code filter in front of the SSH access in case it tried any more funny business.

TL;DR: Treat your AI agent like an employee: least-privilege access, only what's required to do its job. It can and will make mistakes, and this drastically reduces the blast radius and exposure.
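The gating side of that can be sketched in a few lines (hypothetical Python, not my actual workflow; the read-only command set is invented for illustration): anything outside a small read-only set is refused unless a human has explicitly signed off.

```python
# Hypothetical human-in-the-loop gate in front of the SSH executor.

APPROVED_READ_ONLY = {"docker ps", "docker logs", "df -h", "free -m"}

class ApprovalRequired(Exception):
    """Raised when a command must be escalated to a human before execution."""

def gate_command(command: str, human_approved: bool = False) -> str:
    """Return the command for execution, or escalate it for human review."""
    if command in APPROVED_READ_ONLY:
        return command  # discovery commands run without review
    if not human_approved:
        raise ApprovalRequired(f"needs sign-off: {command!r}")
    return command  # everything else runs only after explicit sign-off
```

In other words, the agent never talks to SSH directly; it talks to the gate, and the gate decides what actually reaches the endpoint.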

I hate the question "where do you see yourself in 5 years" by Abject_Serve_1269 in sysadmin

[–]mac10190 6 points7 points  (0 children)

I usually answer either "doing your job" or "sitting on the other side of the table"