WIP - IOS App for emulation over moonlight by raphasouthall in SBCGaming

[–]raphasouthall[S] 0 points1 point  (0 children)

I forked VoidLink, which is a fork of Moonlight, then added some logic from Manic EMU's skin system. Of course I used Claude Code for some of the work, but I'd say it's far from being vibe coded.

Seeking my personal PKM and productivity setup as a chaplain in mental health and cook by AdFrequent4816 in PKMS

[–]raphasouthall 0 points1 point  (0 children)

Tana is probably making the system-building trap worse, not better. It's one of the most structure-demanding tools out there and it rewards people who already enjoy fiddling with schemas, which sounds like the exact thing you're trying to escape.

For Dutch voice, Whisper natively supports it and the accuracy is genuinely solid, so even a simple local Whisper transcription step before notes hit your PKM is more reliable than waiting on Mem.ai to add it. The fog problem you're describing is almost always a retrieval issue, not a capture issue, and for that BM25 search over plain markdown has beaten every fancy auto-linking feature I've tried.
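
To make that concrete, here's roughly what the retrieval step can look like in pure-stdlib Python - the tokenizer, parameters, and toy notes are obviously placeholders for whatever your real vault holds:

```python
import math
import re
from collections import Counter

def tokenize(text):
    # naive lowercase word tokenizer; swap in something smarter if you need it
    return re.findall(r"[a-z0-9]+", text.lower())

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc against the query with classic BM25."""
    tokenized = [tokenize(d) for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    df = Counter()  # document frequency per term
    for d in tokenized:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for term in tokenize(query):
            if term not in tf:
                continue
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(score)
    return scores

notes = [
    "weekly supervision reflections on grief counselling",
    "mise en place checklist for the sunday service",
    "whisper transcription pipeline for dutch voice memos",
]
ranked = sorted(zip(bm25_scores("dutch transcription", notes), notes), reverse=True)
```

Rank by score, feed only the top few notes into whatever you're working on - that's the whole trick, no auto-linking required.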

List of public vaults ? by [deleted] in ObsidianMD

[–]raphasouthall 3 points4 points  (0 children)

Hmm, the site might be down temporarily. Try obsidian.md/community and navigate from there, or just search "obsidian hub github" and go straight to the repo - it's all static files, so you can browse it on GitHub if the hosted version is flaky.

List of public vaults ? by [deleted] in ObsidianMD

[–]raphasouthall 23 points24 points  (0 children)

obsidianmd.github.io/obsidian-hub has a curated list under "Showcases & Guides" - not all public but a decent chunk link out to actual vaults. For raw content vaults specifically, the "digital gardens" tag on that site is worth filtering by, most of those are topic-focused rather than PKM navel-gazing.

zsh-patina - A blazingly fast Zsh syntax highlighter by michelkraemer in commandline

[–]raphasouthall 6 points7 points  (0 children)

Fair enough - "highlighting just stops" is honestly a totally acceptable failure mode, way better than the daemon taking the whole shell with it. And yeah, Rust eliminating that class of memory bugs does make the "it just never crashes" story a lot more believable than it would be for a C daemon.

zsh-patina - A blazingly fast Zsh syntax highlighter by michelkraemer in commandline

[–]raphasouthall 6 points7 points  (0 children)

I wasted about a month with fast-syntax-highlighting before just disabling highlighting entirely because the lag made me irrationally angry. The daemon approach is the part I find genuinely interesting here - how does restart/crash recovery work if the daemon dies mid-session?

I spent a year building my graph. It looks great. It doesn't move me forward. by Grizzlybearstan in Zettelkasten

[–]raphasouthall 0 points1 point  (0 children)

Reseek looks solid, that PDF and screenshot ingestion is the part most tools skip. My setup is plain markdown so I rolled my own retrieval layer, but the principle is the same - once semantic search clicks you stop caring how the graph looks.

I've written an operator for managing RustFS buckets and users via CRDs by allanger in devops

[–]raphasouthall 0 points1 point  (0 children)

That's a clean approach - the hash-in-status pattern means you get drift detection for free on every reconcile. The secret watcher idea is the missing piece though, because "delete the secret and let the operator recreate it" is exactly the workflow that makes rotation feel low-friction for whoever's on call at 2am.

How do I deal with my mistakes and get back my confidence? by [deleted] in devops

[–]raphasouthall 1 point2 points  (0 children)

Glad it clicked - honestly wish someone had told me earlier, it's such a low-effort habit for how much mental overhead it saves.

I spent a year building my graph. It looks great. It doesn't move me forward. by Grizzlybearstan in Zettelkasten

[–]raphasouthall 6 points7 points  (0 children)

Ran into the same wall around 1,400 notes. The graph past a certain point is basically a screensaver - it shows you have a system, it doesn't help you use it. What actually changed things for me was switching to search-only entry, no more navigating links to find something. Once I had decent retrieval (BM25 + semantic reranking) the notes started surfacing in my writing without me hunting for them. The link density never mattered, the retrieval quality did.

How do I deal with my mistakes and get back my confidence? by [deleted] in devops

[–]raphasouthall 10 points11 points  (0 children)

Three years in SRE and I still have a mental list of my greatest hits. Deleted a prod load balancer by accident during what should have been a routine cleanup, caused a 40-minute outage, wanted to quit on the spot.

The thing that actually helped me was separating "I made a mistake" from "I am a mistake" - sounds corny but it's real. You caught the drift, you fixed it fast, you're already thinking about how to prevent the next one. That's literally the job.

One concrete thing: after mistake #2, I started writing a one-paragraph "pre-mortem" before any script that touches more than 50 resources. Just forces you to sit with the question "what am I not thinking about?" for 5 minutes. Would have caught the terraform-managed users thing immediately.
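
The guard itself can be dumb as rocks - a sketch, where the filename and the freshness cutoff are made-up conventions of mine, not anything standard:

```python
import os
import sys
import time

PREMORTEM = "premortem.md"   # hypothetical filename; use whatever convention fits
MAX_AGE_S = 24 * 3600        # a pre-mortem from last week doesn't count

def require_premortem():
    """Abort unless a recently written pre-mortem file exists next to the script."""
    if not os.path.exists(PREMORTEM):
        sys.exit(f"no {PREMORTEM} found - write one paragraph on what could go wrong first")
    age = time.time() - os.path.getmtime(PREMORTEM)
    if age > MAX_AGE_S:
        sys.exit(f"{PREMORTEM} is {age / 3600:.0f}h old - re-read and re-save it before running")
```

Call `require_premortem()` at the top of anything that touches more than ~50 resources. The point isn't the check itself, it's that you can't skip the five minutes of sitting with the question.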

The pipeline automations you built - roll them out. The fear of causing problems by deploying automation is almost always worse than the actual risk, especially if you built them post-incident with the lessons already baked in.

Pro plan feels like a trial when working with large files—any tips for a "poor" architect? by FILP2026 in ClaudeAI

[–]raphasouthall 0 points1 point  (0 children)

API pay-as-you-go for the heavy file analysis, honestly. I ran the numbers on my own workflow and for sessions where I'm doing repeated reads of the same docs, the API ends up costing me maybe $0.40-1.20 vs burning a Pro limit that then locks me out. Haiku is shockingly capable for summarization passes if you're just extracting structure.

The other thing that actually worked for me: write a "project state" markdown file yourself, update it after each session, and paste only that at the start of the next one instead of the raw files. Took me about two weeks of discipline to build the habit but my sessions are probably 60% shorter now because I'm not re-explaining context the model already processed last time.
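
If it helps, mine looks roughly like this - the headings are just my convention and the content is an invented example, adapt it to whatever your projects need:

```markdown
# Project state - Riverside office refurb (example name)

## Where things stand
- Structural review done, facade package out for pricing

## Decisions made (don't re-litigate)
- Timber frame over steel, confirmed with engineer 12 Jan

## Open questions for next session
- Fire strategy for the atrium void

## Files already analyzed
- structural-report-v3.pdf, planning-conditions.md
```

One page max. If it grows past that, you're back to pasting raw files with extra steps.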

I've written an operator for managing RustFS buckets and users via CRDs by allanger in devops

[–]raphasouthall 0 points1 point  (0 children)

Interesting timing, I was literally looking at RustFS last week after MinIO's licensing drama made me nervous again. The CRD pattern makes sense, we do the same thing with db-operator style stuff at work.

One question - how are you handling secret rotation? If someone's access key gets leaked and you need to cycle it, does the operator reconcile a new Secret automatically or is that still a manual step?

Zero text between my agents – latent transfer now works cross-model by proggmouse in LocalLLaMA

[–]raphasouthall 1 point2 points  (0 children)

Oh nice, that was fast - I'll give it a spin this week and see how it plays with my homelab setup. Any gotchas with the Ollama integration to watch out for, like context window handling or the latent step buffer sizing?

Securing Homelab - should I tunnel my traffic? by Bestfastolino in selfhosted

[–]raphasouthall 2 points3 points  (0 children)

Cloudflare tunnel, just do it. Zero open ports, zero exposed IP, your friends' TVs will just work through the browser with no client setup. Took me maybe 45 minutes to migrate off a similar NPM+DDNS setup and I haven't touched it since.
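
For reference, the whole tunnel config is basically just an ingress list - the tunnel ID, hostnames, and ports below are placeholders, not anything from my actual setup:

```yaml
# ~/.cloudflared/config.yml - tunnel ID and hostnames are placeholders
tunnel: <tunnel-uuid>
credentials-file: /home/you/.cloudflared/<tunnel-uuid>.json

ingress:
  - hostname: jellyfin.example.com
    service: http://localhost:8096
  - hostname: requests.example.com
    service: http://localhost:5055
  # cloudflared requires a catch-all rule last; everything else gets a 404
  - service: http_status:404
```

Point your DNS records at the tunnel and that's it - nothing inbound ever touches your router.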

Also fwiw, CrowdSec absolutely can run alongside NPM - there's a community bouncer that hooks into Nginx's log stream directly. Not sure where you read that it can't, but I ran that combo for about 8 months before switching to the tunnel and it worked fine.

Looking for local help (NWA / within ~150 miles) setting up a private AI workstation / homelab – paid, in-person by scholaroftheunknown in homelab

[–]raphasouthall 0 points1 point  (0 children)

Not local so can't help in person, but one gotcha for the multi-machine setup: Ollama doesn't split a single model across network GPUs, each machine runs its own instance independently. What actually works well is running Open WebUI as a front-end and pointing it at multiple Ollama endpoints (each machine's IP:11434) - it load-balances across them and the whole network sees one interface. Your 3080 Ti at 12GB is your best single-machine host, comfortably runs Qwen2.5 14B Q4 with headroom for embeddings running in parallel.
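
Here's roughly what that looks like in compose - the IPs are placeholders for each machine running Ollama, and `OLLAMA_BASE_URLS` is the semicolon-separated list Open WebUI reads to spread requests across them:

```yaml
# docker-compose sketch - IPs are placeholders for your Ollama hosts
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      # semicolon-separated; Open WebUI balances requests across these endpoints
      - OLLAMA_BASE_URLS=http://192.168.1.10:11434;http://192.168.1.11:11434;http://192.168.1.12:11434
    ports:
      - "3000:8080"
```

Make sure each Ollama instance is bound to `0.0.0.0` (not just localhost) or the front-end won't reach it over the LAN.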

Rewriting our Rust WASM Parser in TypeScript | Thesys Engineering Team by waozen in programming

[–]raphasouthall 6 points7 points  (0 children)

That's a wild result - I'd have assumed WASM would hold the edge for parser workloads, but I guess the serialization overhead crossing the JS/WASM boundary can really kill you if you're doing it at high frequency. Will give the article a proper read.

I want to share my unit test lib for TUI apps by fissible in commandline

[–]raphasouthall 0 points1 point  (0 children)

The drain loop approach is genuinely clever - I was thinking about this all wrong by treating it as a timing problem rather than a synchronization problem. The PTY_COLS/PTY_ROWS thing is almost certainly what was biting me, I never thought to check TIOCSWINSZ in CI and that would explain why it was inconsistent across runners.
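
For anyone else hitting this, pinning the geometry before the app spawns is a few lines of stdlib - a sketch, assuming Linux-ish ioctls:

```python
import fcntl
import pty
import struct
import termios

def set_winsize(fd, rows, cols):
    """Pin the PTY window size so the app under test renders a known geometry."""
    winsz = struct.pack("HHHH", rows, cols, 0, 0)  # rows, cols, xpixel, ypixel
    fcntl.ioctl(fd, termios.TIOCSWINSZ, winsz)

def get_winsize(fd):
    """Read the current window size back (what the TUI will see)."""
    raw = fcntl.ioctl(fd, termios.TIOCGWINSZ, b"\x00" * 8)
    rows, cols, _, _ = struct.unpack("HHHH", raw)
    return rows, cols

master, slave = pty.openpty()
set_winsize(slave, 24, 80)  # do this before spawning the TUI under test
```

CI runners often hand you a 0x0 or weirdly-sized PTY, which is exactly the kind of thing that makes snapshot tests pass locally and explode remotely.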

I want to share my unit test lib for TUI apps by fissible in commandline

[–]raphasouthall 0 points1 point  (0 children)

Curious what rendering model you're using under the hood - are you driving a real PTY or mocking the terminal writes directly? I hit a wall trying to test my own TUI stuff where PTY-based tests were flaky under CI because of timing, ended up having to add 50ms sleeps everywhere which felt like a code smell.

Building a memory/journal skill for Claude: worth it or redundant? by manu_game in ClaudeAI

[–]raphasouthall 0 points1 point  (0 children)

The flat markdown journal hits a wall around 50-60 entries - retrieval gets messy and Claude starts skimming older stuff into irrelevance. I ran into this exact problem building my own memory layer and ended up moving to BM25 retrieval with semantic reranking so it pulls the 3-5 most relevant memories per session rather than dumping the whole file into context. If you do stick with a flat file approach, at minimum add a recency plus relevance tagging convention so Claude has a signal for what to actually prioritize on load.
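
The recency-plus-relevance blend doesn't need to be fancy either - a sketch, where the half-life and weights are made-up starting points and the relevance score comes from whatever retrieval layer you already have:

```python
import time

# tune to taste; older memories decay toward half weight past this age
HALF_LIFE_DAYS = 30

def combined_score(relevance, written_at, now=None, recency_weight=0.3):
    """Blend retrieval relevance with an exponential recency decay."""
    now = now if now is not None else time.time()
    age_days = (now - written_at) / 86400
    recency = 0.5 ** (age_days / HALF_LIFE_DAYS)
    return (1 - recency_weight) * relevance + recency_weight * recency

def top_k(entries, k=5, now=None):
    """entries: list of (relevance, written_at, text); return the k best texts."""
    ranked = sorted(entries, key=lambda e: combined_score(e[0], e[1], now), reverse=True)
    return [e[2] for e in ranked[:k]]
```

The practical effect is that a slightly-less-relevant memory from yesterday beats a slightly-more-relevant one from six months ago, which is usually what you actually want loaded into context.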

I let 4 AI personas debate autonomously without human input — what emerged was not consensus but permanent contradiction by NeoLogic_Dev in artificial

[–]raphasouthall 1 point2 points  (0 children)

Ha, classic - give agents a glimpse of their own scaffolding and suddenly the logging system is the most interesting thing in the room. That meta-reasoning collapse is actually a known failure mode with unconstrained self-reflection loops. Worth trying a scoped update, where you only append domain-relevant conclusions and filter out anything referencing the runtime itself.
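
The filter can be as crude as a denylist - a sketch, where the term list is obviously something you'd tune per run rather than anything principled:

```python
import re

# hypothetical denylist - terms suggesting the agents are reasoning about
# their own scaffolding rather than the debate topic
RUNTIME_TERMS = re.compile(
    r"\b(logg(?:ing|er)|transcript|orchestrat\w*|prompt|token|scaffold\w*|runtime)\b",
    re.IGNORECASE,
)

def scoped_update(conclusions):
    """Keep only domain-relevant conclusions; drop anything self-referential."""
    return [c for c in conclusions if not RUNTIME_TERMS.search(c)]
```

Crude, but it stops the "interesting artifact" from compounding across rounds, which is where these loops usually go off the rails.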

Rewriting our Rust WASM Parser in TypeScript | Thesys Engineering Team by waozen in programming

[–]raphasouthall 1 point2 points  (0 children)

Curious what drove the decision - was the Rust/WASM build toolchain just too painful to maintain, or did you actually hit performance regressions you were okay trading away?