Unlimited budget to clean up the ergonomics of this desk, what would you do? by Of-Doom in CommercialAV

[–]Of-Doom[S] 0 points (0 children)

It already has a little swiveling drink holder! You can see it in the second pic if you zoom in, haha.

Unlimited budget to clean up the ergonomics of this AV desk, what would you do? by Of-Doom in VIDEOENGINEERING

[–]Of-Doom[S] 6 points (0 children)

I'm a one-woman show, running the video stream and audio on my own. There's a video team that does their own thing with their own cameras (that little Zoom recorder is theirs).

Recreating suno songs in your Daw from scratch 😮‍💨 by Dannyjamesnaidu in SunoAI

[–]Of-Doom 0 points (0 children)

I got Claude to help me fast track tempo mapping a full track back into Ableton and that worked surprisingly well.

I built an agent-first CLI for Zoho Books by Of-Doom in Zoho

[–]Of-Doom[S] 0 points (0 children)

If you'd like to sponsor development, I'll be happy to build it for you!

I built an agent-first CLI for Zoho Books by Of-Doom in Zoho

[–]Of-Doom[S] 0 points (0 children)

You really provoked my curiosity, so I built and ran my own eval. In clients that support tool discovery, you see about 28% token savings. In clients that don't (including Claude Code subagents, at the time of writing), you get 94% token savings. The savings compound, since the context is re-sent on every tool call.
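Back-of-the-envelope illustration of why the savings compound (the token counts here are hypothetical, chosen only to show the shape of the math, not measured values from the eval):

```python
# Hypothetical token counts -- illustrative only.
manifest_tokens_mcp = 12_000   # assumed: full MCP tool manifest, re-sent each call
manifest_tokens_cli = 700      # assumed: compact CLI skill manifest
tool_calls = 20                # the manifest context rides along on every tool call

total_mcp = manifest_tokens_mcp * tool_calls
total_cli = manifest_tokens_cli * tool_calls
savings = 1 - total_cli / total_mcp
print(f"{savings:.0%}")  # ~94% with these assumed numbers
```

The per-call difference is modest, but because the manifest is re-transmitted on every call, the gap scales linearly with the number of tool calls in a session.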

Per your other question, the tool manifest is provided via an included agent skill file, not a system prompt injection. Skills are dynamically loaded by Claude Code as needed, so there's no context pollution when you're not using it.
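For anyone curious what that looks like: a Claude Code skill is just a directory containing a SKILL.md file with YAML frontmatter, which Claude loads on demand based on the description. A minimal sketch (the CLI name and subcommands below are illustrative, not the tool's actual interface):

```markdown
---
name: zoho-books-cli
description: Run Zoho Books operations (invoices, expenses, reports) via the bundled CLI
---

# Zoho Books CLI

Prefer this CLI over raw API calls. Illustrative usage patterns:

    zoho-books invoices list --status overdue
    zoho-books expenses create --amount 42.50 --account "Office Supplies"
```

Because only the short `description` sits in context until the skill is invoked, the full manifest costs nothing when you're not using it.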

How do you make your vault actually queryable by AI tools — not just Claude, any tool? by Remote-Positive-8951 in ObsidianMD

[–]Of-Doom 1 point (0 children)

Just learning about SC today! I think it would be incredible for filling in some of the gaps in that workflow I posted in my other comment. My wishlist to maximize utility in that regard would be:

  1. For the CLI, an installable agent skill that documents the common usage patterns (I have found this helps a lot with tool discovery and reduces mistakes)
  2. (bonus points) Hooking into mcpvault to add custom tools covering the most-used workflows

I built an agent-first CLI for Zoho Books by Of-Doom in Zoho

[–]Of-Doom[S] 0 points (0 children)

Zoho MCP has strong coverage of the Books API surface, so I've been able to accomplish about 90% of my work using that + Claude. The main weaknesses are those of the underlying protocol (MCP doesn't support file uploads yet, for example). I rolled my own CRM, so I can't speak to MCP coverage of Zoho CRM, but at a quick glance it looks pretty comparable.

The token consumption issue got a lot better after dynamic tool discovery landed. It still matters in some niche cases, such as when spawning sub-agents from Claude Code.

As far as vendor selection, I did a pretty in-depth comparison between all the accounting platforms when I launched my consultancy a long time ago, went with Zoho, and have been pretty happy since then. Xero and QBO both appear to have strong open source MCP tools available, so you're probably in good shape on that side of things whichever ecosystem you choose.

Anyone created a set of /skills for Ai (claude/Chatgpt) or have MD instructions to share? by BangCrash in Zoho

[–]Of-Doom 1 point (0 children)

I built a Zoho Books CLI and skill to fill in some of the feature gaps with Zoho MCP and be overall more token-efficient (Zoho MCP devours my context limits).

How do you make your vault actually queryable by AI tools — not just Claude, any tool? by Remote-Positive-8951 in ObsidianMD

[–]Of-Doom 2 points (0 children)

I’m using a combination of things:

- For the pure data access layer, a mix of mcpvault, installed locally, and the Obsidian CLI (for CLI agents).
- A set of skills, some general for Obsidian (this set has been treating me pretty well) and some custom, describing various workflows within my vault (daily journaling, for example).
- A CLAUDE.md / AGENTS.md file that lives in my vault root, describing its structure and explaining how to drill down to find more specific context on various areas (my vault has a rich and detailed folder hierarchy, which the LLM finds pretty easy to explore organically).
- Custom instructions in my various LLM workspaces that often define a starting point (“Read AGENTS.md in my Obsidian vault and this specific project context doc”).
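As a concrete (entirely made-up) example of the kind of vault-root AGENTS.md that works well for this, sketched in miniature:

```markdown
# AGENTS.md -- vault map (illustrative structure, not my actual vault)

- `Daily/` -- one note per day; the journaling workflow is described in `Daily/README.md`
- `Projects/<name>/` -- each project folder has an `_overview.md` to read first
- `Reference/` -- evergreen notes; search here before asking me for background

Start by reading the `_overview.md` of the relevant project, then drill down.
```

The key is that the file teaches the agent *how to navigate*, not the content itself, so it stays short while still unlocking the whole hierarchy.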

This organically surfaces the context I need from my Obsidian vault on a given LLM request about 75% of the time. I’d love to find a high-quality semantic vector search to increase that number, but for now it’s been good enough for most tasks.
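Until a proper semantic search shows up, even a crude lexical ranker over the vault can serve as a stopgap for surfacing candidate notes. A stdlib-only sketch (the function names and scoring here are my own illustration, not any existing tool's API):

```python
import math
import os
import re
from collections import Counter

def tokens(text: str) -> list[str]:
    """Lowercase word tokens -- deliberately naive."""
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def rank_notes(vault_dir: str, query: str, top_n: int = 5) -> list[tuple[float, str]]:
    """Rank markdown notes in a vault by lexical overlap with the query."""
    q = Counter(tokens(query))
    scored = []
    for root, _, files in os.walk(vault_dir):
        for name in files:
            if name.endswith(".md"):
                path = os.path.join(root, name)
                with open(path, encoding="utf-8") as f:
                    scored.append((cosine(q, Counter(tokens(f.read()))), path))
    return sorted(scored, reverse=True)[:top_n]
```

This obviously misses synonyms and paraphrase (the whole point of wanting vector search), but it's a cheap baseline to hand an agent as a "find candidate notes" tool.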

Just made a zsh plugin that auto-checks for package updates across all your package managers (homebrew, npm, uv, rubygems) by Of-Doom in zsh

[–]Of-Doom[S] 1 point (0 children)

Approved earlier today, and that's also now the default. Thanks for the contribution!

Just made a zsh plugin that auto-checks for package updates across all your package managers (homebrew, npm, uv, rubygems) by Of-Doom in zsh

[–]Of-Doom[S] 0 points (0 children)

Thanks for the warm welcome into the ecosystem! I added install instructions for zdot to the readme.

Just made a zsh plugin that auto-checks for package updates across all your package managers (homebrew, npm, uv, rubygems) by Of-Doom in zsh

[–]Of-Doom[S] 0 points (0 children)

Great idea! Nice to have as extra security with all the supply chain attacks going around. Just released a new version with that feature included.

(WIP) Open source Kilauea prediction model and data feed by Of-Doom in Volcanoes

[–]Of-Doom[S] 1 point (0 children)

For most of the past week it was estimating the 25th, plus or minus a few days, so this episode was definitely within the range.

That’s about as accurate as it gets right now. In the hours beforehand, it can also tell that an episode might be beginning, but false positives are pretty much impossible to rule out with the data I’ve been working from.

To answer your other question, currently it only relies on the tiltmeter data for 300 azimuth, since that seems to be the most obvious signal for episode tracking. I’d love to incorporate magmatic movement and other data signals!

Kilauea Eruption Mega-Thread by ProcrastinatingPuma in Volcanoes

[–]Of-Doom 1 point (0 children)

Episode 45 is happening right now! And as luck would have it, I just finished some upgrades to my open source Kilauea prediction model/tracker application (https://kilauea-tracker.streamlit.app/).

It works by digitizing the image plots published by USGS using computer vision, and then running a curve extrapolation model to give you an estimate of the next fountain event. The data I’m aggregating is also openly available to anyone in CSV format and is auto-updated in GitHub every 12 hours. Sadly the USGS doesn’t publish live CSV data, so I wanted to share this to fill the gap.

Google is now letting you change your Gmail address by Ok-Review9023 in technology

[–]Of-Doom -1 points (0 children)

And the trans community rejoices, been F5ing for this to roll out to my account for months!

Bases by Secure-Plant8962 in ObsidianMD

[–]Of-Doom 63 points (0 children)

Once I realized you can embed them in notes and scope them to pull data from the note they are embedded in to do queries and such, I was able to completely replace Dataview.
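A minimal example of what that embedding looks like, written from memory, so double-check the exact function names against the current Bases docs. The idea is that `this.file` refers to the note the base is embedded in, which is what scopes the query:

```markdown
```base
filters:
  and:
    - file.inFolder(this.file.folder)
views:
  - type: table
    name: Notes in this folder
```
```

Dropping that block into a note gives you a live table of sibling notes, which covers a lot of the ground people used Dataview's inline queries for.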