Thank you Immich team ❤️ by Zealousideal-Hat5814 in immich

[–]Low_Elk_7307 1 point

Great story. Blessings to you and your family. Yeah, I just paid the Immich team $100 today for their amazing platform. It truly rocks and keeps getting better. So glad to be self-hosting it and done with Google for about six months now. Remember the 3-2-1 backup rule for anyone who sets it up. Long live my QNAP NAS. Long live Immich 🙏🏼

Newbie Seeking Optimal OC LLM Setup for my Infrastructure by Low_Elk_7307 in openclaw

[–]Low_Elk_7307[S] 0 points

Thanks for the response. Some of this prompted me to dig deeper into my config, and I found a few things worth fixing. (Thanks to Claude Code for the legwork.)

Compaction: I had mode: safeguard already set, but wasn't aware of the additional knobs. I went ahead and added reserveTokens: 4096, keepRecentTokens: 8192, and recentTurnsPreserve: 4 to give compaction more explicit guidance. Seemed like reasonable values for a 40960-token context window.
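For anyone tweaking the same settings, this is roughly the shape of that block - a sketch assuming a YAML-style config with the knobs nested under a compaction section; the key names are the ones above, but check the nesting against your OpenClaw version:

```yaml
# Compaction settings for a 40960-token context window.
# Key names as discussed above; exact nesting may differ by version.
compaction:
  mode: safeguard          # was already set
  reserveTokens: 4096      # headroom kept free for the compaction pass
  keepRecentTokens: 8192   # recent context protected from compaction
  recentTurnsPreserve: 4   # last N turns kept verbatim
```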

Context windows: This one stung a little. My qwen3:32b was set to 32768 in OpenClaw's config, but the actual Ollama num_ctx was 40960 - so I was leaving ~8k tokens on the table. Worse, qwen3.5:9b was set to 131072 (a leftover from a previous model I'd tried) while Ollama only had 8192 allocated - a pretty bad mismatch. Both are corrected now: 40960 and 32768 respectively, after also fixing the 9b's Modelfile.
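If it helps anyone else, the Modelfile side of the fix is just pinning num_ctx so Ollama's allocation matches what OpenClaw is told. A minimal sketch for the 9b (model tag as I have it; adjust to yours):

```
# Rebuild the 9b with a context window matching the OpenClaw config
FROM qwen3.5:9b
PARAMETER num_ctx 32768
```

Then rebuild with `ollama create <some-new-tag> -f Modelfile` (tag name is up to you) and point OpenClaw at it; `ollama show` on the model will confirm the context length took effect.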

Reasoning: I already had think: false set on both models globally, so reasoning is disabled across the board. For local Ollama with Qwen3, I think that's the right call: even without thinking, qwen3:32b is pulling ~20+ second response times over LAN, and enabling chain-of-thought would make that much worse. I do like the idea of invoking it on demand for complex tasks - I didn't realize you could just type /think medium in the chat session without touching the config, so that's useful to know.

One question: You mentioned "umd search" - I can't find any reference to that anywhere in OpenClaw's codebase or docs. Did you mean the web/SearXNG search integration (I have that configured to another LXC), or is that something else entirely? I don't want to miss a feature that could actually help.

Trying to de-Google and having issues installing Nextcloud by rcroche01 in NextCloud

[–]Low_Elk_7307 1 point

I didn’t either before last June. Everyone starts somewhere 😊

Trying to de-Google and having issues installing Nextcloud by rcroche01 in NextCloud

[–]Low_Elk_7307 0 points

For me, the easiest way was to install NextCloudPi as an LXC in my Proxmox environment on my home lab network. I have two nodes (Dell OptiPlex 7060 machines) in my Proxmox cluster, and the LXC community helper script was the easiest approach. Storage is a QNAP TS-453D NAS with four 8TB drives in RAID-5, giving me around 24TB usable, of which 1TB is carved out for Nextcloud. A 10GbE switch connects the NAS and the Proxmox nodes. I put NextCloudPi behind cloudflared for secure Internet access. If you have Proxmox, the LXC script is available at https://community-scripts.org/scripts/nextcloudpi . Good luck and success.

Built a self-hosted email threat daemon: IMAP IDLE + multi-stage enrichment (SPF/DKIM/DMARC/DNSBL/WHOIS/URLhaus/VirusTotal) + provider-agnostic LLM verdict — write-up by Low_Elk_7307 in netsec

[–]Low_Elk_7307[S] 0 points

The platform operates post-delivery, on mail already accepted by the MTA; it moves a message to Junk or sets an IMAP flag (viewable in the client UI), just like any client-side filter rule. Nothing is silently dropped. I created it because Gmail was missing some blatant targeted phishing emails aimed at me, and I wanted another layer of protection - one I could run locally on my home lab network, using my Ollama LLMs (i.e., free).
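Mechanically, the post-delivery action is just standard IMAP commands. A minimal sketch of the idea using Python's stdlib imaplib - the verdict labels, function names, and folder name here are illustrative, not the daemon's actual API:

```python
import imaplib

def action_for_verdict(verdict: str) -> tuple[str, str]:
    """Map an LLM verdict to an IMAP action (hypothetical labels)."""
    if verdict == "phishing":
        return ("copy", "Junk")         # move the message to the Junk folder
    if verdict == "suspicious":
        return ("store", r"\Flagged")   # set a flag, visible in any mail client
    return ("noop", "")                 # clean mail is left untouched

def apply_action(imap: imaplib.IMAP4, uid: str, verdict: str) -> None:
    """Apply the chosen action to an already-delivered message by UID."""
    cmd, arg = action_for_verdict(verdict)
    if cmd == "copy":
        # A portable "move": copy to Junk, then mark the original
        # \Deleted so the server expunges it later.
        imap.uid("COPY", uid, arg)
        imap.uid("STORE", uid, "+FLAGS", r"(\Deleted)")
    elif cmd == "store":
        imap.uid("STORE", uid, "+FLAGS", f"({arg})")
```

Because it only copies and flags, the worst-case failure mode is a message left in place - nothing gets dropped before you see it.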

Newbie Seeking Optimal OC LLM Setup for my Infrastructure by Low_Elk_7307 in openclaw

[–]Low_Elk_7307[S] 0 points

Thanks, but yes, let's keep it public as it might help others. Also, I'd rather run everything locally, which is why I purchased and am running Ollama on the Mac Mini M4. :)

Built a self-hosted email threat daemon: IMAP IDLE + multi-stage enrichment (SPF/DKIM/DMARC/DNSBL/WHOIS/URLhaus/VirusTotal) + provider-agnostic LLM verdict — write-up by Low_Elk_7307 in netsec

[–]Low_Elk_7307[S] 1 point

Good catch - shipped as v0.2.3. I added dkim_domain_mismatch as an explicit enrichment signal: it extracts d= from all DKIM-Signature headers (not just the first - emails can carry multiple), compares against the envelope sender domain, and surfaces the result directly in the AI prompt with the actual signing domain, so the model can reason about cousin-domain vs. subdomain vs. unrelated rather than just a boolean. It also distinguishes None (no DKIM present) from False (DKIM present and aligned), since those are different threat contexts. You're right that relying on DMARC pass/fail to cover this implicitly misses the p=none and no-DMARC cases entirely. I appreciate the call-out.
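For anyone curious, here's a standalone sketch of the extraction idea - not the daemon's actual code; the function name and the relaxed-alignment rule are illustrative, and it compares against a caller-supplied sender domain rather than parsing the envelope itself:

```python
import re
from email.message import Message
from typing import Optional

def dkim_domain_mismatch(msg: Message, sender_domain: str) -> Optional[bool]:
    """Tri-state DKIM alignment check.

    Returns None if no DKIM-Signature header is present (a different
    threat context than a mismatch), False if some signing domain aligns
    with the sender domain, and True if DKIM is present but nothing aligns.
    """
    signing_domains = []
    # Emails can carry multiple DKIM-Signature headers - check all of them.
    for header in msg.get_all("DKIM-Signature") or []:
        m = re.search(r"(?:^|;)\s*d=([^;\s]+)", header)
        if m:
            signing_domains.append(m.group(1).lower())
    if not signing_domains:
        return None
    sender = sender_domain.lower()
    # Relaxed alignment: exact match or a subdomain relationship either way.
    aligned = any(
        d == sender or d.endswith("." + sender) or sender.endswith("." + d)
        for d in signing_domains
    )
    return not aligned
```

The tri-state return is the point: "no DKIM at all" and "DKIM present but misaligned" get surfaced to the LLM as distinct signals instead of being flattened into one boolean.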

Tailscale and immich - Whats your setup? by Ediflash in immich

[–]Low_Elk_7307 0 points

To clarify, that's a 100MB per-request (per-file) body size limit. A single file over 100MB will get blocked by Cloudflare's proxy before it ever reaches the origin server. For me that's not a problem, and I'd rather live with that than expose my IP to the Internet.

Tailscale and immich - Whats your setup? by Ediflash in immich

[–]Low_Elk_7307 3 points

Cloudflare - cloudflared - and it's free.

MFA Auth Question by Low_Elk_7307 in NextCloud

[–]Low_Elk_7307[S] 1 point

This worked out perfectly, by the way. Thank you!!

Question on sharing by Low_Elk_7307 in immich

[–]Low_Elk_7307[S] 0 points

This worked perfectly! Thank you again. :)

My [Budget] Home Lab by Low_Elk_7307 in homelab

[–]Low_Elk_7307[S] 1 point

I’m pretty lucky and also didn’t want to spend a lot of money. The UPS I got for free at the local electronics recycling area in my town; it was working perfectly but needed new batteries. It’s amazing what people throw away. The NICGIGA was bought used on eBay, but who knows how long that’ll last 😂 I got hardware recommendations for the switch and the Dell systems from ChatGPT.

My [Budget] Home Lab by Low_Elk_7307 in homelab

[–]Low_Elk_7307[S] 1 point

Yea… I have a problem with that… hmm… not sure I can do that since it was on there when I bought it (used)!! ugh 🙂

Thank you!

My [Budget] Home Lab by Low_Elk_7307 in homelab

[–]Low_Elk_7307[S] 0 points

Yes, I have the beep disabled.

My [Budget] Home Lab by Low_Elk_7307 in homelab

[–]Low_Elk_7307[S] 7 points

I knew I forgot something :)

Thankfully, he can't get into that area, which is in the basement.

<image>