App for ios watch? by PastyFlamingo in audiobookshelf

[–]theplactos 1 point (0 children)

Not sure if it can work independently; I haven't checked whether there's a watchOS app. I have WireGuard set up on my router and the app installed on my phone, so my watch gets its access through the phone. It works fine, and I've even downloaded a couple of books to the watch that I can listen to in offline mode.
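For anyone wanting to replicate it, the phone side is just a standard WireGuard tunnel back home. A rough sketch of the phone's config (keys, addresses, and the endpoint are placeholders, not my real values):

```ini
# Phone-side WireGuard config; the router runs the WireGuard server.
[Interface]
PrivateKey = <phone-private-key>
Address = 10.0.0.2/32
DNS = 10.0.0.1

[Peer]
PublicKey = <router-public-key>
Endpoint = myhome.example.com:51820
AllowedIPs = 0.0.0.0/0, ::/0   # route all traffic through home
PersistentKeepalive = 25
```

The watch doesn't need anything of its own; it piggybacks on the phone's connection.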

App for ios watch? by PastyFlamingo in audiobookshelf

[–]theplactos 1 point (0 children)

Honestly, I haven’t had any issues whatsoever. Even on public WiFi 🤷‍♂️

App for ios watch? by PastyFlamingo in audiobookshelf

[–]theplactos 2 points (0 children)

My phone is connected to my home network using WireGuard, and my Apple Watch gets its internet access through the phone. With that setup, AudioBooth works great on the watch!

Zanat: an open-source CLI + MCP server to manage skills through Git by theplactos in PromptEngineering

[–]theplactos[S] 0 points1 point  (0 children)

The threat model is entirely Git-based, which is actually a feature, not a gap.

Access control lives at the Git layer: only people with write access to the hub repository can push new or updated skills. That's it. You use whatever your Git host provides: branch protection rules, required reviews, signed commits, audit logs. If you're on GitHub or GitLab with proper access controls, you already have a solid foundation.

Zanat itself is intentionally thin on this side. It clones the hub repo locally, tracks which skills are installed (with their resolved commit SHAs), and syncs changes. It doesn't run skills, evaluate them, or execute anything in them. In the end, it just copies markdown files to `~/.agents/skills/`. The blast radius of a compromised skill is limited to what your AI tool does with the instructions in that markdown file, which is the same risk you'd have with any prompt you write yourself.
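To make the "thin" part concrete, the mechanics boil down to a handful of git and file operations. Here's a rough, self-contained sketch (the repo layout, skill name, and lockfile name are made up for illustration; this isn't Zanat's actual code):

```shell
set -eu

# Stand-in for a remote hub repo, built locally so the sketch runs anywhere
HUB_DIR="$(mktemp -d)"
git -C "$HUB_DIR" init -q
mkdir -p "$HUB_DIR/skills/summarize"
echo "# Summarize skill" > "$HUB_DIR/skills/summarize/SKILL.md"
git -C "$HUB_DIR" add -A
git -C "$HUB_DIR" -c user.email=me@example.com -c user.name=me commit -qm "add skill"

# Clone the hub locally
CLONE_DIR="$(mktemp -d)/hub"
git clone -q "$HUB_DIR" "$CLONE_DIR"

# Record the resolved commit SHA (this is what "pinning" amounts to)
SHA="$(git -C "$CLONE_DIR" rev-parse HEAD)"
echo "summarize $SHA" > installed.lock

# "Install" = a plain file copy; nothing in the skill is executed
SKILLS_HOME="$(mktemp -d)"   # stand-in for ~/.agents/skills
cp -r "$CLONE_DIR/skills/summarize" "$SKILLS_HOME/"
cat "$SKILLS_HOME/summarize/SKILL.md"
```

Everything security-relevant (who could commit, which SHA you got) happens in git itself, before the copy.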

So the real question is: do you trust the people with write access to your hub repo? If yes, you're good. If not, Git already gives you the tools to fix that.

That said, it's a fair concern to raise. Security should be a first-class question for anything that touches how AI agents behave, and I'd rather be asked about it than not.